Verifying the behavior of Temperature Sensitive Storage Efficiency (TSSE) with SnapMirror on Amazon FSx for NetApp ONTAP
Wondering whether additional deduplication and compression take effect on a SnapMirror destination volume after the transfer completes
Hello, this is のんピ (@non____97).
Have you ever wondered whether additional deduplication or compression takes effect on a SnapMirror destination volume after the transfer completes? I have.
With SnapMirror, you can transfer data while preserving the data reduction achieved on the source volume by Storage Efficiency, such as deduplication and compression.
So, can additional deduplication and compression be applied on the destination volume?
The KB does not explicitly state that additional deduplication and compression take effect when both the source and the destination use TSSE.
- SnapMirror preserves the source's storage efficiency (deduplication, compression, compaction, and so on) on the destination, unless an additional form of compression is applied at the destination. However, temperature-sensitive Storage Efficiency is excluded from this.
- The following table shows the combinations of source and destination volume Storage Efficiency and the transfer results
- Deduplication (D) - inline or background/post-process
- Adaptive compression (CA) - inline or post-process
- Secondary compression (CS) - inline or post-process
- Temperature-sensitive Storage Efficiency (TSSE) - ONTAP 9.8 and later (AFF platforms)
If additional deduplication and compression are possible, it would be useful for the scenario introduced in the article below, where the requirement is "we want to migrate with deduplication applied, but we also want to move most of the data to capacity pool storage."
Specifically, when the existing file server runs ONTAP, it becomes possible to use SnapMirror from the existing file server to the first Amazon FSx for NetApp ONTAP (hereinafter FSxN) file system. If additional deduplication and compression do not take effect, however, letting the data sit on the first FSxN is a waste of time no matter how long you wait.
So I actually verified whether additional deduplication and compression take effect on a SnapMirror destination volume.
Note that the TSSE behavior on FSxN is not very stable. Commands that could not be run before suddenly become available, and information that was not visible becomes visible. The results below are from ONTAP 9.13.1P5 as of 2023-11-21. The behavior may well change in the future, so please keep that in mind when referring to this article.
::*> version
NetApp Release 9.13.1P5: Thu Nov 02 20:37:09 UTC 2023
Straight to the summary
- Even if the destination volume uses TSSE, additional deduplication and compression can run on a SnapMirror destination volume after the transfer completes
  - However, because Snapshots lock the data blocks, the physical data usage does not change
  - To reduce the physical data usage, you must break the SnapMirror relationship and then delete the Snapshots on the destination volume
- If a Snapshot transferred by SnapMirror contains duplicate data blocks, deduplication runs after the transfer completes and the physical data usage is reduced immediately (this does not happen on the initial SnapMirror transfer)
- TSSE is enabled by default on FSxN
  - However, post-process compression (Inactive data compression) is disabled
- If TSSE is enabled on the SnapMirror source volume, the data reduction achieved by TSSE is preserved in transit even if TSSE is not enabled on the destination volume
  - However, unless TSSE is enabled on the destination volume, TSSE cannot be run manually there
  - TSSE on the destination volume must be enabled after the SnapMirror initialization has completed
- Whether post-process compression (Inactive data compression) is effective on the SnapMirror destination volume depends on the source volume
  - Even if it is enabled on the destination volume, it is disabled there when it is disabled on the source volume
  - Conversely, even if it is disabled on the destination volume, it is enabled there when it is enabled on the source volume
  - It is therefore advisable to enable Inactive data compression on the source volume
- The Inactive data compression thresholds of the source and destination volumes can be set to different values
- As of 2023-11-21, the inactive-data-compression run interval on FSxN with ONTAP 9.13.1P5 is fixed at 24 hours
  - The start time differs from volume to volume
  - As far as I could confirm, it is unrelated to when the file system, SVM, or volume was created, or to when Inactive data compression was enabled
- The Inactive data compression threshold defaults to 14 days
  - The configurable minimum is 1 day, so transferred data basically needs to sit for at least one day
- If you cannot wait for the 24-hour run interval, running volume efficiency inactive-data-compression start immediately starts compression of data that has aged past the configured threshold days (see the command sketch after this list)
  - Adding the -inactive-days 0 option to volume efficiency inactive-data-compression start starts the compression processing without waiting a day
- volume efficiency inactive-data-compression start cannot be run against a volume with Tiering Policy All
- Running volume efficiency start -scan-old-data does not trigger the Inactive data compression processing
  - They must be run separately
- On a SnapMirror destination volume, the -scan-old-data option must be added when running volume efficiency start
- When the SnapMirror destination volume is made writable, the Inactive data compression threshold reverts to the default
  - Take care if you customized the Inactive data compression threshold before snapmirror break
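For quick reference, here is a minimal sketch of the commands behind the summary above. The volume name vol1_dst is the destination volume used later in this article, and diag privilege level is assumed:

# On a SnapMirror destination, post-process efficiency requires -scan-old-data
::*> volume efficiency start -volume vol1_dst -scan-old-data

# Start Inactive data compression right away; -inactive-days 0 treats all data as cold
# (this fails on Tiering Policy All volumes, as noted above)
::*> volume efficiency inactive-data-compression start -volume vol1_dst -inactive-days 0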
What is Temperature Sensitive Storage Efficiency (TSSE)?
Temperature Sensitive Storage Efficiency (TSSE) is a mechanism that changes the compression level applied to a volume's data according to how frequently the data is accessed.
Furthermore, ONTAP 9.8 introduced TSSE (Temperature Sensitive Storage Efficiency): data is initially compressed inline in 8KB units with an emphasis on performance, and once it has not been accessed for a while it is recompressed in 32KB units, achieving a higher compression ratio.
The data reduction processing differs between write time (inline) and afterwards (post-process), as summarized below; a sketch of how to check these states follows the list.
- At write time, processing runs in the following order
  - Zero-block deduplication
  - Inline deduplication
  - Inline compression
  - Compaction
- Post-process deduplication and compression run at separate times
  - Post-process deduplication: when the change log exceeds its threshold (default 20%)
  - Post-process compression (Inactive data compression): when data blocks that have aged past the configured threshold (default 14 days) are judged to be cold data
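These states can be checked from the ONTAP CLI. A minimal sketch, assuming the -fields names match the corresponding entries in the -instance output shown later in this article:

# TSSE is in effect when the storage efficiency mode is "efficient"
::*> volume efficiency show -volume vol1 -fields storage-efficiency-mode, inline-compression, inline-dedupe

# Post-process compression (Inactive data compression) state and threshold days
::*> volume efficiency inactive-data-compression show -volume vol1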
A detailed explanation of TSSE is planned for a separate article.
Let's try it
Test environment
The test environment is as follows.
I prepared two FSxN file systems and SnapMirror a volume from FSxN 1 to FSxN 2.
Checking the SnapMirror source volume before creating the test files
Before creating the test files on the SnapMirror source volume, let's check the aggregate and volume information.
Note that I casually set the privilege level to diag throughout, but please do not imitate this. It is an extremely powerful privilege level and should basically only be used under instruction from support. I worked in diag this time because I wanted information from as low a layer as possible.
# 権限レベル ::> set diag Warning: These diagnostic commands are for use by NetApp personnel only. Do you want to continue? {y|n}: y # aggregate の情報の確認 ::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 860.6GB 0% online 2 FsxId0762660cbce raid0, 3713bf-01 mirrored, normal ::*> aggr show -instance Aggregate: aggr1 Storage Type: ssd Checksum Style: advanced_zoned Number Of Disks: 8 Is Mirrored: true Disks for First Plex: NET-2.3, NET-2.4, NET-2.5, NET-2.6 Disks for Mirrored Plex: NET-3.5, NET-3.3, NET-3.6, NET-3.4 Partitions for First Plex: - Partitions for Mirrored Plex: - Node: FsxId0762660cbce3713bf-01 Free Space Reallocation: off HA Policy: sfo Ignore Inconsistent: off Space Reserved for Snapshot Copies: 5% Aggregate Nearly Full Threshold Percent: 93% Aggregate Full Threshold Percent: 96% Checksum Verification: on RAID Lost Write: off Enable Thorough Scrub: - Hybrid Enabled: false Available Size: 860.6GB Checksum Enabled: true Checksum Status: active Cluster: FsxId0762660cbce3713bf Home Cluster ID: adb9f05c-851e-11ee-84de-4b7ecb818153 DR Home ID: - DR Home Name: - Inofile Version: 4 Has Mroot Volume: false Has Partner Node Mroot Volume: false Home ID: 3323134325 Home Name: FsxId0762660cbce3713bf-01 Total Hybrid Cache Size: 0B Hybrid: false Inconsistent: false Is Aggregate Home: true Max RAID Size: 4 Flash Pool SSD Tier Maximum RAID Group Size: - Owner ID: 3323134325 Owner Name: FsxId0762660cbce3713bf-01 Used Percentage: 0% Plexes: /aggr1/plex0, /aggr1/plex1 RAID Groups: /aggr1/plex0/rg0 (advanced_zoned) /aggr1/plex1/rg0 (advanced_zoned) RAID Lost Write State: off RAID Status: raid0, mirrored, normal RAID Type: raid0 SyncMirror Resync Snapshot Frequency in Minutes: 5 Is Root: false Space Used by Metadata for Volume Efficiency: 0B Size: 861.8GB State: online Maximum Write Alloc Blocks: 0 Used Size: 1.12GB Uses Shared Disks: false UUID String: 44857d47-851f-11ee-84de-4b7ecb818153 Number Of Volumes: 2 Is Flash Pool Caching: - Is Eligible for Auto Balance Aggregate: false State of the aggregate being balanced: ineligible Total Physical Used Size: 25.80MB Physical Used Percentage: 0% State Change Counter for Auto Balancer: 0 SnapLock Type: non-snaplock Is NVE Capable: false Is in the precommit phase of Copy-Free Transition: false Is a 7-Mode transitioning aggregate that is not yet committed in clustered Data ONTAP and is currently out of space: false Threshold When Aggregate Is Considered Unbalanced (%): 70 Threshold When Aggregate Is Considered Balanced (%): 40 Resynchronization Priority: low Space Saved by Data Compaction: 0B Percentage Saved by Data Compaction: 0% Amount of compacted data: 0B Timestamp of Aggregate Creation: 11/17/2023 07:59:51 Enable SIDL: off Composite: true Is FabricPool Mirrored: false Capacity Tier Used Size: 0B Space Saved by Storage Efficiency: 0B Percentage of Space Saved by Storage Efficiency: 0% Amount of Shared bytes count by Storage Efficiency: 0B Inactive Data Reporting Enabled: - Timestamp when Inactive Data Reporting was Enabled: - Enable Aggregate level Encryption: false Aggregate uses data protected SEDs: false azcs read optimization: on Metadata Reserve Space Required For Revert: 0B # aggregate 内のスペースの情報の確認 ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 1.12GB 0% Aggregate Metadata 2.57MB 0% Snapshot Reserve 45.36GB 5% Total Used 46.47GB 5% Total 
Physical Used 26.14MB 0% Total Provisioned Space 65GB 7% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> aggr show-space -instance Aggregate: aggr1 Bin Number: 0 Tier Name For Show Command: Performance Tier Aggregate Display Name: aggr1 Uuid of the Aggregate: 44857d47-851f-11ee-84de-4b7ecb818153 Volume Footprints: 1.12GB Volume Footprints Percent: 0% Total Space for Snapshot Copies in Bytes: 45.36GB Space Reserved for Snapshot Copies: 5% Aggregate Metadata: 2.59MB Aggregate Metadata Percent: 0% Total Used: 46.47GB Total Used Percent: 5% Size: 907.1GB Snapshot Reserve Unusable: - Snapshot Reserve Unusable Percent: - Total Physical Used Size: 26.50MB Physical Used Percentage: 0% Performance Tier Inactive User Data: 0B Performance Tier Inactive User Data Percent: 0% Aggregate Dedupe Metadata: - Aggregate Dedupe Metadata Percent: - Aggregate Dedupe Temporary Metadata: - Aggregate Dedupe Temporary Metadata Percent: - Total Space Provisioned inside Aggregate: 65GB Percentage Space Provisioned inside Aggregate: 7% Total Physical Used Size: - Physical Used Percentage: - Total Object Store Logical Referenced Capacity: - Object Store Logical Referenced Capacity Percentage: - (DEPRECATED)-Object Store Metadata: - (DEPRECATED)-Object Store Metadata Percent: - (DEPRECATED)-Total Unreclaimed Space: - (DEPRECATED)-Object Store Unreclaimed Space Percentage: - Object Store Size: - Object Store Space Saved by Storage Efficiency: - Object Store Space Saved by Storage Efficiency Percentage: - Total Logical Used Size: - Logical Used Percentage: - Logical Unreferenced Capacity: - Logical Unreferenced Percentage: - Aggregate: aggr1 Bin Number: 1 Tier Name For Show Command: Object Store: FSxFabricpoolObjectStore Aggregate Display Name: aggr1 Uuid of the Aggregate: 44857d47-851f-11ee-84de-4b7ecb818153 Volume Footprints: - Volume Footprints Percent: - Total Space for Snapshot Copies in Bytes: - Space Reserved for Snapshot Copies: - Aggregate Metadata: - Aggregate Metadata Percent: - Total Used: - Total Used Percent: - Size: - Snapshot Reserve Unusable: - Snapshot Reserve Unusable Percent: - Total Physical Used Size: - Physical Used Percentage: - Performance Tier Inactive User Data: - Performance Tier Inactive User Data Percent: - Aggregate Dedupe Metadata: - Aggregate Dedupe Metadata Percent: - Aggregate Dedupe Temporary Metadata: - Aggregate Dedupe Temporary Metadata Percent: - Total Space Provisioned inside Aggregate: - Percentage Space Provisioned inside Aggregate: - Total Physical Used Size: 0B Physical Used Percentage: - Total Object Store Logical Referenced Capacity: 0B Object Store Logical Referenced Capacity Percentage: - (DEPRECATED)-Object Store Metadata: - (DEPRECATED)-Object Store Metadata Percent: - (DEPRECATED)-Total Unreclaimed Space: - (DEPRECATED)-Object Store Unreclaimed Space Percentage: - Object Store Size: - Object Store Space Saved by Storage Efficiency: - Object Store Space Saved by Storage Efficiency Percentage: - Total Logical Used Size: 0B Logical Used Percentage: - Logical Unreferenced Capacity: 0B Logical Unreferenced Percentage: - 2 entries were displayed. 
# aggregate レベルのStorage Effieincyの確認 ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0762660cbce3713bf-01 Total Storage Efficiency Ratio: 1.00:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 1.00:1 Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 1.00:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0762660cbce3713bf-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 120KB Total Physical Used: 2.62MB Total Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used Without Snapshots: 120KB Total Data Reduction Physical Used Without Snapshots: 2.62MB Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones: 120KB Total Data Reduction Physical Used without snapshots and flexclones: 2.62MB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 600KB Total Physical Used in FabricPool Performance Tier: 3.26MB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 600KB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.26MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Logical Space Used for All Volumes: 120KB Physical Space Used for All Volumes: 120KB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 2.62MB Physical Space Used by the Aggregate: 2.62MB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 0B Physical Size Used by Snapshot Copies: 0B Snapshot Volume Data Reduction Ratio: 1.00:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 # ボリュームレベルのStorage Efficiencyの確認 ::*> volume efficiency show -instance Vserver Name: svm Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:04:16 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 13b698f2-8520-11ee-bfa4-f9928d2238a7 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Fri Nov 17 08:06:45 2023 Last Success Operation End: Fri Nov 17 08:06:45 2023 Last Operation Begin: Fri Nov 17 08:06:45 2023 Last Operation End: Fri Nov 17 08:06:45 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 300KB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: - Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - 
Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 0 Duplicate Blocks Found: 0 Sorting Begin: - Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: - Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 0 Same FP Count: 0 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: - Number of indirect blocks skipped by compression phase: - Volume Has Extended Auto Adaptive Compression: true # Inactive data compression の状態の確認 ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: svm Is Enabled: false Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0%
The key points are roughly as follows.
- The aggregate has 860.6GB of free space
- No deduplicated or compressed data exists
- The volume's Storage Efficiency Mode is efficient, so TSSE is enabled
- The following processing is enabled
  - Inline compression
  - Inline deduplication
  - Post-process deduplication
- Inactive data compression is disabled
Inactive data compression, the heart of TSSE, appears to be disabled by default, so post-process compression does not run. If you want TSSE to behave the way it was designed to, enable Inactive data compression manually.
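A minimal sketch of enabling it, using the same command that is run later in this verification (diag privilege level assumed):

# Enable post-process compression (Inactive data compression) on the volume
::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true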
Creating the test files
Let's create a test file.
# Mount the volume
$ sudo mount -t nfs svm-04855fdf5ed7737a8.fs-0762660cbce3713bf.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1

# Confirm the volume is mounted
$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-04855fdf5ed7737a8.fs-0762660cbce3713bf.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G  320K   61G   1% /mnt/fsxn/vol1

# Create a 5GiB test file
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/test_file_1 bs=1M count=5120
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 36.2132 s, 148 MB/s

# Check the volume usage after creating the test file
$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-04855fdf5ed7737a8.fs-0762660cbce3713bf.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G  5.1G   56G   9% /mnt/fsxn/vol1
Let's check the aggregate and volume information after creating the test file.
# aggregate の情報の確認 FsxId0e65b3d07f71f905d::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 859.5GB 0% online 2 FsxId0762660cbce raid0, 3713bf-01 mirrored, normal # aggregate 内のスペースの情報の確認 ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 1.21GB 0% Aggregate Metadata 42.68MB 0% Snapshot Reserve 45.36GB 5% Total Used 46.61GB 5% Total Physical Used 5.24GB 1% Total Provisioned Space 65GB 7% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 5.04GB - Logical Referenced Capacity 5.02GB - Logical Unreferenced Capacity 24.96MB - Total Physical Used 5.04GB - 2 entries were displayed. # ボリュームのフットプリントの確認 ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 5.07GB 1% Footprint in Performance Tier 99.50MB 2% Footprint in FSxFabricpoolObjectStore 5GB 98% Volume Guarantee 0B 0% Flexible Volume Metadata 107.5MB 0% Delayed Frees 26.94MB 0% File Operation Metadata 4KB 0% Total Footprint 5.20GB 1% Effective Total Footprint 5.20GB 1% # aggregate レベルのStorage Effieincyの確認 ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0762660cbce3713bf-01 Total Storage Efficiency Ratio: 1.00:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 1.00:1 Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 1.00:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0762660cbce3713bf-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 5.02GB Total Physical Used: 5.08GB Total Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used Without Snapshots: 5.02GB Total Data Reduction Physical Used Without Snapshots: 5.08GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones: 5.02GB Total Data Reduction Physical Used without snapshots and flexclones: 5.08GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 99.28MB Total Physical Used in FabricPool Performance Tier: 117.3MB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 99.28MB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 117.3MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Logical Space Used for All Volumes: 5.02GB Physical Space Used for All Volumes: 5.02GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 5.08GB Physical Space Used by the Aggregate: 5.08GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 0B Physical Size Used by Snapshot Copies: 0B Snapshot Volume Data Reduction 
Ratio: 1.00:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 # ボリュームレベルのStorage Efficiencyの確認 ::*> volume efficiency show -instance Vserver Name: svm Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:09:22 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 13b698f2-8520-11ee-bfa4-f9928d2238a7 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Fri Nov 17 08:06:45 2023 Last Success Operation End: Fri Nov 17 08:06:45 2023 Last Operation Begin: Fri Nov 17 08:06:45 2023 Last Operation End: Fri Nov 17 08:06:45 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: - Changelog Usage: 7% Changelog Size: 50MB Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 5.07GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: - Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 0 Duplicate Blocks Found: 0 Sorting Begin: - Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: - Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 0 Same FP Count: 0 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: - Number of indirect blocks skipped by compression phase: - Volume Has Extended Auto Adaptive Compression: true # Inactive data compression の状態の確認 ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: svm Is Enabled: false Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan 
started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0%
The key points are as follows.
- The aggregate free space slightly decreased, to 859.5GB
- Because the Tiering Policy is All, SSD (Performance Tier) usage only grew to 1.21GB
  - Instead, capacity pool storage (FSxFabricpoolObjectStore) usage increased by 5GB
- No deduplicated or compressed data exists
  - Since the Last Operation Begin time has not changed, TSSE processing itself (deduplication, compression, and so on) has not run
- Inactive data compression has not run
I was able to move the data to capacity pool storage without deduplication or compression taking effect.
Next, I make nine copies of the test file.
Rather than copying the files one by one and confirming each has fully drained to capacity pool storage before the next, I run four copies in parallel.
$ seq 2 10 \
    | xargs -i \
        -P 4 \
        bash -c "sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_{}"

$ ls -l /mnt/fsxn/vol1
total 52635320
-rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_1
-rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_10
-rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_2
-rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_3
-rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_4
-rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_5
-rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_6
-rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_7
-rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_8
-rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_9

$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-04855fdf5ed7737a8.fs-0762660cbce3713bf.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G   30G   31G  50% /mnt/fsxn/vol1
Let's check the amount of data after copying the files.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 859.5GB 0% online 2 FsxId0762660cbce raid0, 3713bf-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 2.05GB 0% Aggregate Metadata 255.3MB 0% Snapshot Reserve 45.36GB 5% Total Used 47.65GB 5% Total Physical Used 25.21GB 3% Total Provisioned Space 65GB 7% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 29.59GB - Logical Referenced Capacity 29.45GB - Logical Unreferenced Capacity 143.9MB - Total Physical Used 29.59GB - 2 entries were displayed. ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 29.87GB 3% Footprint in Performance Tier 817.4MB 3% Footprint in FSxFabricpoolObjectStore 29.34GB 97% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Deduplication Metadata 30.07MB 0% Deduplication 30.07MB 0% Delayed Frees 268.8MB 0% File Operation Metadata 4KB 0% Total Footprint 30.38GB 3% Effective Total Footprint 30.38GB 3% ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0762660cbce3713bf-01 Total Storage Efficiency Ratio: 1.16:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 1.16:1 Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 1.16:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0762660cbce3713bf-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 50.28GB Total Physical Used: 43.33GB Total Storage Efficiency Ratio: 1.16:1 Total Data Reduction Logical Used Without Snapshots: 50.28GB Total Data Reduction Physical Used Without Snapshots: 43.33GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.16:1 Total Data Reduction Logical Used without snapshots and flexclones: 50.28GB Total Data Reduction Physical Used without snapshots and flexclones: 43.33GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.16:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 1.34GB Total Physical Used in FabricPool Performance Tier: 14.27GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.34GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 14.27GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Logical Space Used for All Volumes: 50.28GB Physical Space Used for All Volumes: 29.62GB Space Saved by Volume Deduplication: 20.66GB Space Saved by Volume Deduplication and pattern detection: 20.66GB Volume Deduplication Savings ratio: 1.70:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.70:1 Logical Space Used by the Aggregate: 43.33GB Physical Space Used by the Aggregate: 43.33GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 0B Physical Size Used by Snapshot Copies: 0B Snapshot Volume Data Reduction Ratio: 1.00:1 Logical Size Used by 
FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume efficiency show -instance Vserver Name: svm Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:01:31 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 13b698f2-8520-11ee-bfa4-f9928d2238a7 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Fri Nov 17 08:20:46 2023 Last Success Operation End: Fri Nov 17 08:27:46 2023 Last Operation Begin: Fri Nov 17 08:20:46 2023 Last Operation End: Fri Nov 17 08:27:46 2023 Last Operation Size: 6.67GB Last Operation Error: - Operation Frequency: Once approxmiately every 22 min(s) and 32 sec(s) Changelog Usage: 34% Changelog Size: 226.6MB Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 50.54GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 1749372 Duplicate Blocks Found: 438652 Sorting Begin: Fri Nov 17 08:20:50 UTC 2023 Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: Fri Nov 17 08:24:49 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 1749372 Same FP Count: 438652 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 438652 Stale Donor Count: 438652 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: - Number of indirect blocks skipped by compression phase: - Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: svm Is Enabled: false Scan Mode: - Progress: IDLE Status: FAILURE Compression Algorithm: lzopro Failure Reason: Inactive data compression disabled on volume Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done 
Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 64GB 0 30.92GB 64GB 60.80GB 29.87GB 49% 20.66GB 41% 20.66GB 41% 9.43GB 0B 0% 50.54GB 83% - 50.54GB - -
The key points are as follows.
- The aggregate free space is unchanged at 859.5GB
- Having copied the 5GiB file nine times, the logical size has grown by 45GiB to roughly 50GB
- Because the Tiering Policy is All, SSD (Performance Tier) usage is 255.3MB
  - Instead, capacity pool storage (FSxFabricpoolObjectStore) usage has increased to 29.59GB
- Deduplication has reduced the data by 20.66GB
- No data reduction from compression has occurred
- The Inactive data compression status is FAILURE
  - Since the feature is disabled, I assume the scan merely attempted to start but did not actually run
Manually running TSSE on a volume with Tiering Policy All
Next, let's run TSSE manually.
Most of the data is in capacity pool storage, and TSSE only supports the FabricPool local tier (primary storage, in FSx for ONTAP terms).
Temperature-sensitive storage efficiency
Beginning in ONTAP 9.8, temperature-sensitive storage efficiency (TSSE) is available. TSSE uses temperature scans to determine how hot or cold data is and compresses larger or smaller blocks of data accordingly — making storage efficiency more efficient.
Beginning in ONTAP 9.10.1, TSSE is supported on volumes located on FabricPool-enabled local tiers (storage aggregates). TSSE compression-based storage efficiencies are preserved when tiering to cloud tiers.
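Incidentally, which tiering policy a volume uses can be confirmed as follows (a minimal sketch; tiering-policy is a standard field of volume show):

::*> volume show -volume vol1 -fields tiering-policy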
As a test, I run volume efficiency start with -scan-old-data specified so that deduplication and compression are applied to the existing data.
If the documentation is accurate, the amount of data reduction should not change.
While we are at it, let's also check whether Inactive data compression runs in conjunction.
As preparation, enable Inactive data compression.
# Check the available options
::*> volume efficiency inactive-data-compression modify ?
  [ -vserver <vserver name> ]  *Vserver Name (default: svm)
  [-volume] <volume name>  *Volume Name
  [[-progress] <text>]  *Progress
  [ -status <text> ]  *Status
  [ -failure-reason <text> ]  *Failure Reason
  [ -total-blocks <integer> ]  *Total Blocks to be Processed
  [ -total-processed <integer> ]  *Total Blocks Processed
  [ -percentage <percent> ]  *Progress
  [ -is-enabled {true|false} ]  *State of Inactive Data Compression on the Volume
  [ -threshold-days <integer> ]  *Inactive data compression scan threshold days value
  [ -threshold-days-min <integer> ]  *Inactive data compression scan threshold minimum allowed value.
  [ -threshold-days-max <integer> ]  *Inactive data compression scan threshold maximum allowed value.
  [ -read-history-window-size <integer> ]  *Time window(in days) for which client reads data is collected for tuning.
  [ -tuning-enabled {true|false} ]  *State of auto-tuning of Inactive data compression scan on volume.
  [ -compression-algorithm {lzopro|zstd} ]  *Inactive data compression algorithm

# Enable Inactive data compression
::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true

# Check the state of Inactive data compression
::*> volume efficiency inactive-data-compression show -volume vol1
Vserver    Volume Is-Enabled Scan Mode Progress Status  Compression-Algorithm
---------- ------ ---------- --------- -------- ------- ---------------------
svm        vol1   true       -         IDLE     FAILURE lzopro

::*> volume efficiency inactive-data-compression show -instance
Volume: vol1
Vserver: svm
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: FAILURE
Compression Algorithm: lzopro
Failure Reason: Inactive data compression disabled on volume
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 0
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 0
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%
Inactive data compression is now enabled.
Let's check whether compression is now reported as enabled as well.
::*> volume efficiency show -instance Vserver Name: svm Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:11:23 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 13b698f2-8520-11ee-bfa4-f9928d2238a7 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Fri Nov 17 08:44:02 2023 Last Success Operation End: Fri Nov 17 09:00:16 2023 Last Operation Begin: Fri Nov 17 08:44:02 2023 Last Operation End: Fri Nov 17 09:00:16 2023 Last Operation Size: 22.67GB Last Operation Error: - Operation Frequency: Once approxmiately every 32 min(s) and 27 sec(s) Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 50.76GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 7252241 Duplicate Blocks Found: 5941521 Sorting Begin: Fri Nov 17 08:44:02 UTC 2023 Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: Fri Nov 17 08:44:11 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 7252241 Same FP Count: 5941521 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 5941521 Stale Donor Count: 5941521 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: - Number of indirect blocks skipped by compression phase: - Volume Has Extended Auto Adaptive Compression: true
Compression remained false. Inactive data compression and compression do not appear to be linked.
Now, let's run TSSE manually.
# Run compression, deduplication, and compaction, with shared blocks and Snapshots included in the scope
# Confirms that -compression is not supported when auto adaptive compression is enabled
::*> volume efficiency start -volume vol1 -scan-old-data -compression -dedupe -compaction -shared-blocks -snapshot-blocks

Warning: This operation scans all of the data in volume "vol1" of Vserver "svm". It might take a significant time, and degrade performance during that time.
         Use of "-shared-blocks|-a" option can increase space usage as shared blocks will be compressed.
         Use of "-snapshot-blocks|-b" option can increase space usage as data which is part of Snapshot will be compressed.
Do you want to continue? {y|n}: y

Error: command failed: Failed to start efficiency on volume "vol1" of Vserver "svm": "-compression" option is not supported on auto-adaptive compression enabled volumes.

# Run deduplication and compaction, with shared blocks and Snapshots included in the scope
# In an environment where compression is not enabled, shared blocks and Snapshots cannot be included in the scope
# As of 2023/11/22, compression cannot be enabled on FSxN, so -shared-blocks and -snapshot-blocks cannot be specified
::*> volume efficiency start -volume vol1 -scan-old-data -shared-blocks -snapshot-blocks

Warning: This operation scans all of the data in volume "vol1" of Vserver "svm". It might take a significant time, and degrade performance during that time.
         Use of "-shared-blocks|-a" option can increase space usage as shared blocks will be compressed.
         Use of "-snapshot-blocks|-b" option can increase space usage as data which is part of Snapshot will be compressed.
Do you want to continue? {y|n}: y

Error: command failed: Failed to start efficiency on volume "vol1" of Vserver "svm": Compression is not enabled. Use of options -a and -b is not valid.

# Run without additional options
::*> volume efficiency start -volume vol1 -scan-old-data

Warning: This operation scans all of the data in volume "vol1" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y

Error: command failed: Failed to start efficiency on volume "vol1" of Vserver "svm": Another sis operation is currently active.

# Confirm that TSSE processing has started
::*> volume efficiency show
Vserver    Volume           State     Status      Progress                Policy
---------- ---------------- --------- ----------- ----------------------- ----------
svm        vol1             Enabled   Active      21437776 KB (90%) Done  auto

::*> volume efficiency show
Vserver    Volume           State     Status      Progress                Policy
---------- ---------------- --------- ----------- ----------------------- ----------
svm        vol1             Enabled   Active      21782932 KB (91%) Done  auto

::*> volume efficiency show
Vserver    Volume           State     Status      Progress                Policy
---------- ---------------- --------- ----------- ----------------------- ----------
svm        vol1             Enabled   Active      23726608 KB (99%) Done  auto

::*> volume efficiency show
Vserver    Volume           State     Status      Progress                Policy
---------- ---------------- --------- ----------- ----------------------- ----------
svm        vol1             Enabled   Idle        Idle for 00:00:21       auto
After running TSSE manually, let's check whether the amount of data reduction has changed.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 859.4GB 0% online 2 FsxId0762660cbce raid0, 3713bf-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 2.05GB 0% Aggregate Metadata 268.4MB 0% Snapshot Reserve 45.36GB 5% Total Used 47.67GB 5% Total Physical Used 8.58GB 1% Total Provisioned Space 65GB 7% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 29.59GB - Logical Referenced Capacity 29.45GB - Logical Unreferenced Capacity 143.9MB - Total Physical Used 29.59GB - 2 entries were displayed. ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 29.88GB 3% Footprint in Performance Tier 824.5MB 3% Footprint in FSxFabricpoolObjectStore 29.34GB 97% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Deduplication Metadata 30.07MB 0% Deduplication 30.07MB 0% Delayed Frees 273.5MB 0% File Operation Metadata 4KB 0% Total Footprint 30.38GB 3% Effective Total Footprint 30.38GB 3% ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0762660cbce3713bf-01 Total Storage Efficiency Ratio: 1.66:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 1.66:1 Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 1.66:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0762660cbce3713bf-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 50.51GB Total Physical Used: 30.39GB Total Storage Efficiency Ratio: 1.66:1 Total Data Reduction Logical Used Without Snapshots: 50.51GB Total Data Reduction Physical Used Without Snapshots: 30.39GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.66:1 Total Data Reduction Logical Used without snapshots and flexclones: 50.51GB Total Data Reduction Physical Used without snapshots and flexclones: 30.39GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.66:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 1.36GB Total Physical Used in FabricPool Performance Tier: 1.33GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.02:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.36GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.33GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.02:1 Logical Space Used for All Volumes: 50.51GB Physical Space Used for All Volumes: 29.62GB Space Saved by Volume Deduplication: 20.88GB Space Saved by Volume Deduplication and pattern detection: 20.88GB Volume Deduplication Savings ratio: 1.70:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.70:1 Logical Space Used by the Aggregate: 30.39GB Physical Space Used by the Aggregate: 30.39GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 0B Physical Size Used by Snapshot Copies: 0B Snapshot Volume Data Reduction Ratio: 1.00:1 Logical Size Used by 
FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume efficiency show -instance Vserver Name: svm Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:03:54 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 13b698f2-8520-11ee-bfa4-f9928d2238a7 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Fri Nov 17 08:44:02 2023 Last Success Operation End: Fri Nov 17 09:00:16 2023 Last Operation Begin: Fri Nov 17 08:44:02 2023 Last Operation End: Fri Nov 17 09:00:16 2023 Last Operation Size: 22.67GB Last Operation Error: - Operation Frequency: Once approxmiately every 28 min(s) and 42 sec(s) Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 50.76GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 7252241 Duplicate Blocks Found: 5941521 Sorting Begin: Fri Nov 17 08:44:02 UTC 2023 Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: Fri Nov 17 08:44:11 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 7252241 Same FP Count: 5941521 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 5941521 Stale Donor Count: 5941521 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: - Number of indirect blocks skipped by compression phase: - Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: svm Is Enabled: false Scan Mode: - Progress: IDLE Status: FAILURE Compression Algorithm: lzopro Failure Reason: Inactive data compression disabled on volume Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done 
Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 64GB 0 30.92GB 64GB 60.80GB 29.88GB 49% 20.88GB 41% 20.88GB 41% 9.65GB 0B 0% 50.76GB 83% - 50.76GB - -
Perhaps because the change log had exceeded the 20% threshold at 37%, the TSSE processing had already run.
Checking the results, the deduplication and compression savings were unchanged. Just to be sure, I ran volume efficiency start -volume vol1 -scan-old-data one more time, but the result was the same.
So, as expected, it seems no additional data reduction can be obtained for data blocks residing in capacity pool storage.
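As an aside, the change log usage that triggers post-process deduplication can be watched without dumping the full instance output. A sketch, assuming the -fields name changelog-usage matches the Changelog Usage entry of volume efficiency show -instance:

# Field name assumed from the "Changelog Usage" entry in the -instance output
::*> volume efficiency show -volume vol1 -fields changelog-usage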
Manually running Inactive data compression
Let's also run Inactive data compression manually.
::*> volume efficiency inactive-data-compression start -volume vol1

Error: command failed: Failed to start inactive data compression scan on volume "vol1" in Vserver "svm". Reason: "CA tiering Policy is all"
This failed with an error saying that the Inactive data compression scan could not be started because the Tiering Policy is All.
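If you did need the scan to run on such a volume, one conceivable workaround, untested in this verification, would be to switch the tiering policy first, since volume modify accepts -tiering-policy:

# Untested sketch: relax the tiering policy, then start the scan
# Note: this alone does not pull already-tiered data back to the SSD tier
::*> volume modify -vserver svm -volume vol1 -tiering-policy none
::*> volume efficiency inactive-data-compression start -volume vol1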
Creating the SnapMirror relationship
Let's create the SnapMirror relationship.
For a detailed explanation of SnapMirror, see the following article.
First, set up cluster peering.
Before that, check the IP addresses of the intercluster LIFs on each cluster.
::> cluster identity show

          Cluster UUID: adb9f05c-851e-11ee-84de-4b7ecb818153
          Cluster Name: FsxId0762660cbce3713bf
 Cluster Serial Number: 1-80-000011
      Cluster Location:
       Cluster Contact:

::> network interface show -service-policy default-intercluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
FsxId0762660cbce3713bf
            inter_1      up/up    10.0.8.198/24      FsxId0762660cbce3713bf-01
                                                                   e0e     true
            inter_2      up/up    10.0.8.77/24       FsxId0762660cbce3713bf-02
                                                                   e0e     true
2 entries were displayed.
::> cluster identity show

          Cluster UUID: 5b94cdf2-8501-11ee-adbe-13bc02fe3110
          Cluster Name: FsxId0648fddba7bd041af
 Cluster Serial Number: 1-80-000011
      Cluster Location:
       Cluster Contact:

::> network interface show -service-policy default-intercluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
FsxId0648fddba7bd041af
            inter_1      up/up    10.0.8.86/24       FsxId0648fddba7bd041af-01
                                                                   e0e     true
            inter_2      up/up    10.0.8.108/24      FsxId0648fddba7bd041af-02
                                                                   e0e     true
2 entries were displayed.
Create the cluster peering from FSxN 2.
::> cluster peer create -peer-addrs 10.0.8.198 10.0.8.77

Notice: Use a generated passphrase or choose a passphrase of 8 or more characters. To ensure the authenticity of the peering relationship, use a phrase or sequence of characters that would be hard to guess.

Enter the passphrase:
Confirm the passphrase:

Notice: Now use the same passphrase in the "cluster peer create" command in the other cluster.
Create the cluster peering from FSxN 1 as well.
::> cluster peer show This table is currently empty. ::> cluster peer create -peer-addrs 10.0.8.86 10.0.8.108 Notice: Use a generated passphrase or choose a passphrase of 8 or more characters. To ensure the authenticity of the peering relationship, use a phrase or sequence of characters that would be hard to guess. Enter the passphrase: Confirm the passphrase: ::> cluster peer show Peer Cluster Name Cluster Serial Number Availability Authentication ------------------------- --------------------- -------------- -------------- FsxId0648fddba7bd041af 1-80-000011 Available ok
Cluster peering is now established.
Next, we set up SVM peering.
Create the SVM peering from FSxN 1.
::> vserver peer create -vserver svm -peer-vserver svm2 -applications snapmirror -peer-cluster FsxId0648fddba7bd041af Info: [Job 46] 'vserver peer create' job queued ::> vserver peer show-all Peer Peer Peering Remote Vserver Vserver State Peer Cluster Applications Vserver ----------- ----------- ------------ ----------------- -------------- --------- svm svm2 initializing FsxId0648fddba7bd041af
Accept the SVM peering on the FSxN 2 side.
::> vserver peer show-all Peer Peer Peering Remote Vserver Vserver State Peer Cluster Applications Vserver ----------- ----------- ------------ ----------------- -------------- --------- svm2 svm pending FsxId0762660cbce3713bf snapmirror svm ::> vserver peer accept -vserver svm2 -peer-vserver svm Info: [Job 45] 'vserver peer accept' job queued ::> vserver peer show-all Peer Peer Peering Remote Vserver Vserver State Peer Cluster Applications Vserver ----------- ----------- ------------ ----------------- -------------- --------- svm2 svm peered FsxId0762660cbce3713bf snapmirror svm
With the groundwork in place, we configure SnapMirror.
Create the SnapMirror relationship from the FSxN 2 side.
::> snapmirror show This table is currently empty. ::> snapmirror protect -path-list svm:vol1 -destination-vserver svm2 -policy MirrorAllSnapshots -auto-initialize false -support-tiering true -tiering-policy none [Job 46] Job is queued: snapmirror protect for list of source endpoints beginning with "svm:vol1". ::> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Uninitialized Idle - true - ::> snapmirror show -instance Source Path: svm:vol1 Destination Path: svm2:vol1_dst Relationship Type: XDP Relationship Group Type: none SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Mirror State: Uninitialized Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Percent Complete for Current Status: - Network Compression Ratio: - Snapshot Checkpoint: - Newest Snapshot: - Newest Snapshot Timestamp: - Exported Snapshot: - Exported Snapshot Timestamp: - Healthy: true Unhealthy Reason: - Destination Volume Node: FsxId0648fddba7bd041af-01 Relationship ID: 759fa7eb-8530-11ee-adbe-13bc02fe3110 Current Operation ID: - Transfer Type: - Transfer Error: - Current Throttle: - Current Transfer Priority: - Last Transfer Type: - Last Transfer Error: - Last Transfer Size: - Last Transfer Network Compression Ratio: - Last Transfer Duration: - Last Transfer From: - Last Transfer End Timestamp: - Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: - Identity Preserve Vserver DR: - Volume MSIDs Preserved: - Is Auto Expand Enabled: - Number of Successful Updates: 0 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 0 Total Transfer Time in Seconds: 0 FabricLink Source Role: - FabricLink Source Bucket: - FabricLink Peer Role: - FabricLink Peer Bucket: - FabricLink Topology: - FabricLink Pull Byte Count: - FabricLink Push Byte Count: - FabricLink Pending Work Count: - FabricLink Status: -
The SnapMirror relationship was created, and vol1_dst was created as the SnapMirror destination volume.
Checking the SnapMirror destination volume
Let's inspect the SnapMirror destination volume created by snapmirror protect.
::> volume show -volume vol1_dst Vserver Volume Aggregate State Type Size Available Used% --------- ------------ ------------ ---------- ---- ---------- ---------- ----- svm2 vol1_dst aggr1 online DP 32.71GB 31.07GB 0% ::> volume show -volume vol1_dst -instance Vserver Name: svm2 Volume Name: vol1_dst Aggregate Name: aggr1 List of Aggregates for FlexGroup Constituents: aggr1 Encryption Type: none List of Nodes Hosting the Volume: FsxId0648fddba7bd041af-01 Volume Size: 32.71GB Volume Data Set ID: 1027 Volume Master Data Set ID: 2157478065 Volume State: online Volume Style: flex Extended Volume Style: flexvol FlexCache Endpoint Type: none Is Cluster-Mode Volume: true Is Constituent Volume: false Number of Constituent Volumes: - Export Policy: default User ID: - Group ID: - Security Style: - UNIX Permissions: ------------ Junction Path: - Junction Path Source: - Junction Active: - Junction Parent Volume: - Comment: Available Size: 31.07GB Filesystem Size: 32.71GB Total User-Visible Size: 31.07GB Used Size: 268KB Used Percentage: 0% Volume Nearly Full Threshold Percent: 95% Volume Full Threshold Percent: 98% Maximum Autosize: 100TB Minimum Autosize: 32.71GB Autosize Grow Threshold Percentage: 90% Autosize Shrink Threshold Percentage: 85% Autosize Mode: grow_shrink Total Files (for user-visible data): 1018235 Files Used (for user-visible data): 96 Space Guarantee in Effect: true Space SLO in Effect: true Space SLO: none Space Guarantee Style: none Fractional Reserve: 0% Volume Type: DP Snapshot Directory Access Enabled: true Space Reserved for Snapshot Copies: 5% Snapshot Reserve Used: 0% Snapshot Policy: none Creation Time: Fri Nov 17 10:02:46 2023 Language: C.UTF-8 Clone Volume: false Node name: FsxId0648fddba7bd041af-01 Clone Parent Vserver Name: - FlexClone Parent Volume: - NVFAIL Option: off Volume's NVFAIL State: false Force NVFAIL on MetroCluster Switchover: off Is File System Size Fixed: false (DEPRECATED)-Extent Option: off Reserved Space for Overwrites: 0B Primary Space Management Strategy: volume_grow Read Reallocation Option: off Naming Scheme for Automatic Snapshot Copies: create_time Inconsistency in the File System: false Is Volume Quiesced (On-Disk): false Is Volume Quiesced (In-Memory): false Volume Contains Shared or Compressed Data: false Space Saved by Storage Efficiency: 0B Percentage Saved by Storage Efficiency: 0% Space Saved by Deduplication Along With VBN ZERO Savings: 0B Percentage Saved by Deduplication: 0% Unique Data Which Got Shared by Deduplication: 0B Space Saved by Compression: 0B Percentage Space Saved by Compression: 0% Volume Size Used by Snapshot Copies: 0B Block Type: 64-bit Is Volume Moving: false Flash Pool Caching Eligibility: read-write Flash Pool Write Caching Ineligibility Reason: - Constituent Volume Role: - QoS Policy Group Name: - QoS Adaptive Policy Group Name: - Caching Policy Name: - Is Volume Move in Cutover Phase: false Number of Snapshot Copies in the Volume: 0 VBN_BAD may be present in the active filesystem: false Is Volume on a hybrid aggregate: false Total Physical Used Size: 268KB Physical Used Percentage: 0% FlexGroup Name: - Is Volume a FlexGroup: false SnapLock Type: non-snaplock Vserver DR Protection: - Enable or Disable Encryption: false Is Volume Encrypted: false Encryption State: none Encryption Key ID: Encryption Key Creation Time: - Application: - Is Fenced for Protocol Access: false Protocol Access Fence Owner: - Is SIDL enabled: off Over Provisioned Size: 0B Available Snapshot Reserve Size: 1.63GB Logical Used Size: 268KB 
Logical Used Percentage: 0% Logical Available Size: - Logical Size Used by Active Filesystem: 268KB Logical Size Used by All Snapshots: 0B Logical Space Reporting: false Logical Space Enforcement: false Volume Tiering Policy: none Performance Tier Inactive User Data: 0B Performance Tier Inactive User Data Percent: 0% Tags to be Associated with Objects Stored on a FabricPool: - Does the Object Tagging Scanner Need to Run on This Volume: false Is File System Analytics Supported: false Reason File System Analytics is not Supported: File system analytics is not supported on SnapMirror destination volumes. File System Analytics State: off File System Analytics Scan Progress: - Activity Tracking State: off Is Activity Tracking Supported: false Reason Activity Tracking Is Not Supported: Volume activity tracking is not supported on SnapMirror destination volumes. Is SMBC Master: false Is SMBC Failover Capable: false SMBC Consensus: - Anti-ransomware State: disabled Granular data: disabled Enable Snapshot Copy Locking: false Expiry Time: - ComplianceClock Time: - Are Large Size Volumes and Files Enabled: false
The only notable point is that the volume type is DP, since it is a SnapMirror destination.
We also check the TSSE-related information for the aggregate and the volume.
::> set diag Warning: These diagnostic commands are for use by NetApp personnel only. Do you want to continue? {y|n}: y ::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 860.6GB 0% online 3 FsxId0648fddba7b raid0, d041af-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 1.13GB 0% Aggregate Metadata 60.42MB 0% Snapshot Reserve 45.36GB 5% Total Used 46.54GB 5% Total Physical Used 753.7MB 0% Total Provisioned Space 34.71GB 4% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> volume show-footprint -volume vol1_dst Vserver : svm2 Volume : vol1_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 268KB 0% Footprint in Performance Tier 672KB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 107.5MB 0% Delayed Frees 404KB 0% File Operation Metadata 4KB 0% Total Footprint 108.1MB 0% Effective Total Footprint 108.1MB 0% ::*> volume show-footprint -volume vol1_dst -instance Vserver: svm2 Volume Name: vol1_dst Volume MSID: 2157478065 Volume DSID: 1027 Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Aggregate Name: aggr1 Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110 Hostname: FsxId0648fddba7bd041af-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: - Deduplication Footprint Percent: - Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 268KB Volume Data Footprint Percent: 0% Flexible Volume Metadata Footprint: 107.5MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 404KB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 108.1MB Total Footprint Percent: 0% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 672KB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: - Total Deduplication Footprint Percent: - Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 108.1MB Effective Total after Footprint Data Reduction Percent: 0% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0648fddba7bd041af-01 Total Storage Efficiency Ratio: 1.00:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 1.00:1 Total Data Reduction 
Efficiency Ratio w/o Snapshots & FlexClones: 1.00:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0648fddba7bd041af-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 3.11MB Total Physical Used: 61.81MB Total Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used Without Snapshots: 548KB Total Data Reduction Physical Used Without Snapshots: 60.98MB Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones: 548KB Total Data Reduction Physical Used without snapshots and flexclones: 60.98MB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 3.77MB Total Physical Used in FabricPool Performance Tier: 62.81MB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.20MB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 61.97MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Logical Space Used for All Volumes: 548KB Physical Space Used for All Volumes: 548KB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 61.81MB Physical Space Used by the Aggregate: 61.81MB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 2.58MB Physical Size Used by Snapshot Copies: 856KB Snapshot Volume Data Reduction Ratio: 3.08:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.08:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 2 Number of SIS Change Log Disabled Volumes: 1 ::*> volume efficiency show -volume vol1_dst -instance Vserver Name: svm2 Volume Name: vol1_dst Volume Path: /vol/vol1_dst State: Disabled Auto State: - Status: Idle Progress: Idle for 00:00:00 Type: Regular Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: 0 Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: - Last Success Operation End: - Last Operation Begin: - Last Operation End: Fri Nov 17 10:09:20 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 0B Logical Data Limit: 4KB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: - Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 0 Duplicate Blocks Found: 0 Sorting Begin: - Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: - Fingerprints Deleted: 0 Checking Begin: - Compression: false 
Inline Compression: false Application IO Size: - Compression Type: - Storage Efficiency Mode: - Verify Trigger Rate: 1 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 0 Same FP Count: 0 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: - Cross Volume Background Deduplication: false Extended Compressed Data: false Volume has auto adaptive compression savings: false Volume doing auto adaptive compression: false auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: false ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance There are no entries matching your query. ::*> volume efficiency inactive-data-compression show Vserver Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm ---------- ------ ---------- --------- -------- ------ --------------------- svm2 vol1_2 false - IDLE SUCCESS lzopro ::*> volume show -volume vol1_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------- ---- --------- --------------- ------- ----- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol1_dst 32.71GB - 31.07GB 32.71GB 31.07GB 268KB 0% 0B 0% 0B 0% 0B 0B 0% 268KB 0% - 268KB 0B 0%
Not only Compression but also Inline Dedupe and Inline Compression are disabled.
Storage Efficiency Mode is -, and the State is Disabled in the first place.
Also, Inactive data compression information could not be retrieved for this volume.
Let's see whether TSSE can be enabled manually.
::*> volume efficiency modify -volume vol1_dst -storage-efficiency-mode efficient -inline-dedupe true -data-compaction true Error: command failed: The "-storage-efficiency-mode" parameter is only supported on RW volumes.
It seems -storage-efficiency-mode cannot be set on a DP volume.
The following KB also states that there is no need to enable TSSE on the destination volume, so we proceed with the verification as is.
If supported by the ONTAP release and platform, no steps are required to enable TSSE on the SnapMirror destination volume
How does the temperature sensitive storage efficiency feature affect SnapMirror? - NetApp
Initializing the SnapMirror relationship
We initialize the SnapMirror relationship.
# SnapMirror relationshipの初期化 ::*> snapmirror initialize -destination-path svm2:vol1_dst -source-path svm:vol1 Operation is queued: snapmirror initialize of destination "svm2:vol1_dst". # SnapMirror relationshipの確認 ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Uninitialized Transferring 0B true 11/17 10:58:10 ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Uninitialized Transferring 1.85GB true 11/17 10:58:24 ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Uninitialized Transferring 2.68GB true 11/17 10:58:39 ::*> snapmirror show -instance Source Path: svm:vol1 Source Cluster: - Source Vserver: svm Source Volume: vol1 Destination Path: svm2:vol1_dst Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol1_dst Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): unlimited Mirror State: Uninitialized Relationship Status: Transferring File Restore File Count: - File Restore File List: - Transfer Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 Snapshot Progress: 3.40GB Total Progress: 3.40GB Network Compression Ratio: 1:1 Snapshot Checkpoint: 282.2KB Newest Snapshot: - Newest Snapshot Timestamp: - Exported Snapshot: - Exported Snapshot Timestamp: - Healthy: true Relationship ID: 759fa7eb-8530-11ee-adbe-13bc02fe3110 Source Vserver UUID: 0d9b83f3-8520-11ee-84de-4b7ecb818153 Destination Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Current Operation ID: 318a01d7-8538-11ee-adbe-13bc02fe3110 Transfer Type: initialize Transfer Error: - Last Transfer Type: - Last Transfer Error: - Last Transfer Error Codes: - Last Transfer Size: - Last Transfer Network Compression Ratio: - Last Transfer Duration: - Last Transfer From: - Last Transfer End Timestamp: - Unhealthy Reason: - Progress Last Updated: 11/17 10:59:10 Relationship Capability: 8.2 and above Lag Time: - Current Transfer Priority: normal SMTape Operation: - Destination Volume Node Name: FsxId0648fddba7bd041af-01 Identity Preserve Vserver DR: - Number of Successful Updates: 0 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 0 Total Transfer Time in Seconds: 0 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: - ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Uninitialized Transferring 20.99GB true 11/17 11:02:17 ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State 
Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Uninitialized Transferring 29.70GB true 11/17 11:03:35 ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - ::*> snapmirror show -instance Source Path: svm:vol1 Source Cluster: - Source Vserver: svm Source Volume: vol1 Destination Path: svm2:vol1_dst Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol1_dst Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Snapmirrored Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: - Newest Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 Newest Snapshot Timestamp: 11/17 10:58:10 Exported Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 Exported Snapshot Timestamp: 11/17 10:58:10 Healthy: true Relationship ID: 759fa7eb-8530-11ee-adbe-13bc02fe3110 Source Vserver UUID: 0d9b83f3-8520-11ee-84de-4b7ecb818153 Destination Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: update Last Transfer Error: - Last Transfer Error Codes: - Last Transfer Size: 0B Last Transfer Network Compression Ratio: 1:1 Last Transfer Duration: 0:0:0 Last Transfer From: svm:vol1 Last Transfer End Timestamp: 11/17 11:04:01 Unhealthy Reason: - Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: 0:6:37 Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0648fddba7bd041af-01 Identity Preserve Vserver DR: - Number of Successful Updates: 1 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 32421956176 Total Transfer Time in Seconds: 351 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
It completed in about 6 minutes and 30 seconds.
Let's check the aggregate and volume information after SnapMirror's initial transfer.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 830.3GB 4% online 3 FsxId0648fddba7b raid0, d041af-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 31.33GB 3% Aggregate Metadata 88.12MB 0% Snapshot Reserve 45.36GB 5% Total Used 76.78GB 8% Total Physical Used 31.10GB 3% Total Provisioned Space 37.54GB 4% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> volume show-footprint -volume vol1_dst Vserver : svm2 Volume : vol1_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 30.01GB 3% Footprint in Performance Tier 30.11GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 202.2MB 0% Delayed Frees 107.8MB 0% File Operation Metadata 4KB 0% Total Footprint 30.31GB 3% Effective Total Footprint 30.31GB 3% ::*> volume show-footprint -volume vol1_dst -instance Vserver: svm2 Volume Name: vol1_dst Volume MSID: 2157478065 Volume DSID: 1027 Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Aggregate Name: aggr1 Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110 Hostname: FsxId0648fddba7bd041af-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: - Deduplication Footprint Percent: - Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 30.01GB Volume Data Footprint Percent: 3% Flexible Volume Metadata Footprint: 202.2MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 107.8MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 30.31GB Total Footprint Percent: 3% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 30.11GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: - Total Deduplication Footprint Percent: - Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 30.31GB Effective Total after Footprint Data Reduction Percent: 3% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0648fddba7bd041af-01 Total Storage Efficiency Ratio: 3.35:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 1.69:1 Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 1.69:1 ::*> aggr show-efficiency -instance Name of the Aggregate: 
aggr1 Node where Aggregate Resides: FsxId0648fddba7bd041af-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 100.9GB Total Physical Used: 30.09GB Total Storage Efficiency Ratio: 3.35:1 Total Data Reduction Logical Used Without Snapshots: 50.34GB Total Data Reduction Physical Used Without Snapshots: 29.76GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.69:1 Total Data Reduction Logical Used without snapshots and flexclones: 50.34GB Total Data Reduction Physical Used without snapshots and flexclones: 29.76GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.69:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 100.9GB Total Physical Used in FabricPool Performance Tier: 30.15GB Total FabricPool Performance Tier Storage Efficiency Ratio: 3.35:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 50.34GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 29.83GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.69:1 Logical Space Used for All Volumes: 50.34GB Physical Space Used for All Volumes: 29.68GB Space Saved by Volume Deduplication: 20.66GB Space Saved by Volume Deduplication and pattern detection: 20.66GB Volume Deduplication Savings ratio: 1.70:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.70:1 Logical Space Used by the Aggregate: 30.09GB Physical Space Used by the Aggregate: 30.09GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 50.55GB Physical Size Used by Snapshot Copies: 337.0MB Snapshot Volume Data Reduction Ratio: 153.60:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 153.60:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 1 ::*> volume efficiency show -volume vol1_dst -instance Vserver Name: svm2 Volume Name: vol1_dst Volume Path: /vol/vol1_dst State: Disabled Auto State: - Status: Idle Progress: Idle for 00:00:00 Type: Regular Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: 0 Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: - Last Success Operation End: - Last Operation Begin: - Last Operation End: Fri Nov 17 11:07:37 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 0B Logical Data Limit: 4KB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: - Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 0 Duplicate Blocks Found: 0 Sorting Begin: - Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: - Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage 
Efficiency Mode: efficient Verify Trigger Rate: 1 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 0 Same FP Count: 0 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume show -volume vol1_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol1_dst 35.54GB 0 4.08GB 35.54GB 33.76GB 29.68GB 87% 20.66GB 41% 20.66GB 41% 29.34GB 0B 0% 50.22GB 149% - 
50.22GB 0B 0%
The key points are as follows.
- The deduplication and compression savings from the source volume are preserved
- The destination volume's Storage Efficiency Mode became efficient
  - However, the state remains Disabled
- Inactive data compression was also enabled
Indeed, the deduplication and compression savings are preserved even without enabling TSSE on the destination volume.
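If you only want to spot-check these settings rather than read the whole -instance output, something along these lines should work; a sketch, with field names inferred from the output above (they may differ by ONTAP version):
# Spot-check the efficiency state and mode inherited by the destination volume
::*> volume efficiency show -volume vol1_dst -fields state, storage-efficiency-mode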
Running TSSE manually on the destination volume
Next, we run TSSE manually on the destination volume.
# Run compression, deduplication, and compaction, targeting shared blocks and Snapshot blocks as well ::*> volume efficiency start -volume vol1_dst -scan-old-data -compression -dedupe -compaction -shared-blocks -snapshot-blocks Warning: This operation scans all of the data in volume "vol1_dst" of Vserver "svm2". It might take a significant time, and degrade performance during that time. Use of "-shared-blocks|-a" option can increase space usage as shared blocks will be compressed. Use of "-snapshot-blocks|-b" option can increase space usage as data which is part of Snapshot will be compressed. Do you want to continue? {y|n}: y Error: command failed: Failed to start efficiency on volume "vol1_dst" of Vserver "svm2": Operation is not enabled.
It failed, presumably because Storage Efficiency is disabled.
We enable it and try again.
# Change Storage Efficiency to enabled ::*> volume efficiency on -volume vol1_dst Efficiency for volume "vol1_dst" of Vserver "svm2" is enabled. # Run with inline compression, post-process compression, inline deduplication, and compaction also enabled ::*> volume efficiency modify -volume vol1_dst -inline-compression true -compression true -inline-dedupe true -data-compaction true Error: command failed: Failed to modify efficiency configuration for volume "vol1_dst" of Vserver "svm2": Inline deduplication is not supported on a data protection secondary volume.
It could not run with inline deduplication specified.
We run it with only -scan-old-data specified.
::*> volume efficiency start -volume vol1_dst -scan-old-data Warning: This operation scans all of the data in volume "vol1_dst" of Vserver "svm2". It might take a significant time, and degrade performance during that time. Do you want to continue? {y|n}: y The efficiency operation for volume "vol1_dst" of Vserver "svm2" has started. ::*> volume efficiency show -volume vol1_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol1_dst Enabled Active 3407872 KB Scanned - ::*> volume efficiency show -volume vol1_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol1_dst Enabled Active 18082144 KB Searched - # 実行状態の確認 ::*> volume efficiency show -volume vol1_dst -instance Vserver Name: svm2 Volume Name: vol1_dst Volume Path: /vol/vol1_dst State: Enabled Auto State: - Status: Active Progress: 0 KB (0%) Done Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Sat Nov 18 01:35:06 2023 Last Success Operation End: Sat Nov 18 01:35:06 2023 Last Operation Begin: Sat Nov 18 01:35:06 2023 Last Operation End: Sat Nov 18 01:35:06 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 50.22GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Saving Checkpoint Time: Sat Nov 18 01:46:08 UTC 2023 Checkpoint Operation Type: Scan Checkpoint Stage: Saving_pass2 Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 7690893 Blocks Processed For Compression: 0 Gathering Begin: Sat Nov 18 01:43:42 UTC 2023 Gathering Phase 2 Begin: Sat Nov 18 01:45:55 UTC 2023 Fingerprints Sorted: 7690893 Duplicate Blocks Found: 6380173 Sorting Begin: Sat Nov 18 01:45:55 UTC 2023 Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: Sat Nov 18 01:46:08 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 7690893 Same FP Count: 6380173 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 
L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency show -volume vol1_dst -instance Vserver Name: svm2 Volume Name: vol1_dst Volume Path: /vol/vol1_dst State: Enabled Auto State: - Status: Active Progress: 1032512 KB (4%) Done Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Sat Nov 18 01:35:06 2023 Last Success Operation End: Sat Nov 18 01:35:06 2023 Last Operation Begin: Sat Nov 18 01:35:06 2023 Last Operation End: Sat Nov 18 01:35:06 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 50.23GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Saving Checkpoint Time: Sat Nov 18 01:46:17 UTC 2023 Checkpoint Operation Type: Scan Checkpoint Stage: Saving_sharing Checkpoint Substage: - Checkpoint Progress: 0 KB (0%) Done Fingerprints Gathered: 7690893 Blocks Processed For Compression: 0 Gathering Begin: Sat Nov 18 01:43:42 UTC 2023 Gathering Phase 2 Begin: Sat Nov 18 01:45:55 UTC 2023 Fingerprints Sorted: 7690893 Duplicate Blocks Found: 6380173 Sorting Begin: Sat Nov 18 01:45:55 UTC 2023 Blocks Deduplicated: 257761 Blocks Snapshot Crunched: 0 De-duping Begin: Sat Nov 18 01:46:08 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 7690893 Same FP Count: 6380173 Same FBN: 0 Same Data: 257761 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency show -volume vol1_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol1_dst Enabled Active 19013220 KB (74%) Done - ::*> volume efficiency show -volume vol1_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol1_dst Enabled Active 22015808 KB (86%) Done - ::*> volume efficiency show -volume vol1_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 
vol1_dst Enabled Idle Idle for 00:00:29 -
It ran successfully.
Let's check the aggregate and volume information.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 829.8GB 4% online 3 FsxId0648fddba7b raid0, d041af-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 31.68GB 3% Aggregate Metadata 246.0MB 0% Snapshot Reserve 45.36GB 5% Total Used 77.28GB 9% Total Physical Used 33.39GB 4% Total Provisioned Space 35.80GB 4% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> volume show-footprint -volume vol1_dst Vserver : svm2 Volume : vol1_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 30.17GB 3% Footprint in Performance Tier 30.43GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 202.2MB 0% Deduplication Metadata 30.07MB 0% Deduplication 30.07MB 0% Delayed Frees 268.4MB 0% File Operation Metadata 4KB 0% Total Footprint 30.66GB 3% Effective Total Footprint 30.66GB 3% ::*> volume show-footprint -volume vol1_dst -instance Vserver: svm2 Volume Name: vol1_dst Volume MSID: 2157478065 Volume DSID: 1027 Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Aggregate Name: aggr1 Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110 Hostname: FsxId0648fddba7bd041af-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: 30.07MB Deduplication Footprint Percent: 0% Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 30.17GB Volume Data Footprint Percent: 3% Flexible Volume Metadata Footprint: 202.2MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 268.4MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 30.66GB Total Footprint Percent: 3% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 30.43GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: 30.07MB Total Deduplication Footprint Percent: 0% Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 30.66GB Effective Total after Footprint Data Reduction Percent: 3% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0648fddba7bd041af-01 Total Storage Efficiency Ratio: 3.32:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 3.37:1 Total Data Reduction Efficiency Ratio w/o Snapshots & 
FlexClones: 3.37:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0648fddba7bd041af-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 100.8GB Total Physical Used: 30.38GB Total Storage Efficiency Ratio: 3.32:1 Total Data Reduction Logical Used Without Snapshots: 50.23GB Total Data Reduction Physical Used Without Snapshots: 14.90GB Total Data Reduction Efficiency Ratio Without Snapshots: 3.37:1 Total Data Reduction Logical Used without snapshots and flexclones: 50.23GB Total Data Reduction Physical Used without snapshots and flexclones: 14.90GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 3.37:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 100.8GB Total Physical Used in FabricPool Performance Tier: 30.50GB Total FabricPool Performance Tier Storage Efficiency Ratio: 3.31:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 50.26GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 15.02GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.35:1 Logical Space Used for All Volumes: 50.23GB Physical Space Used for All Volumes: 14.66GB Space Saved by Volume Deduplication: 35.57GB Space Saved by Volume Deduplication and pattern detection: 35.57GB Volume Deduplication Savings ratio: 3.43:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 3.43:1 Logical Space Used by the Aggregate: 30.38GB Physical Space Used by the Aggregate: 30.38GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 50.56GB Physical Size Used by Snapshot Copies: 15.48GB Snapshot Volume Data Reduction Ratio: 3.27:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.27:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume efficiency show -volume vol1_dst -instance Vserver Name: svm2 Volume Name: vol1_dst Volume Path: /vol/vol1_dst State: Enabled Auto State: - Status: Idle Progress: Idle for 00:02:49 Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Sat Nov 18 01:43:42 2023 Last Success Operation End: Sat Nov 18 01:52:03 2023 Last Operation Begin: Sat Nov 18 01:43:42 2023 Last Operation End: Sat Nov 18 01:52:03 2023 Last Operation Size: 29.34GB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 50.26GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 7690893 Blocks Processed For Compression: 0 Gathering Begin: Sat Nov 18 01:43:42 UTC 2023 Gathering Phase 2 Begin: Sat Nov 18 01:45:55 UTC 2023 Fingerprints Sorted: 7690893 Duplicate Blocks Found: 
6380173 Sorting Begin: Sat Nov 18 01:45:55 UTC 2023 Blocks Deduplicated: 6380173 Blocks Snapshot Crunched: 0 De-duping Begin: Sat Nov 18 01:46:08 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 7690893 Same FP Count: 6380173 Same FBN: 0 Same Data: 6380173 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume show -volume vol1_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- 
----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol1_dst 33.80GB 0 3.63GB 33.80GB 32.11GB 28.48GB 88% 35.57GB 56% 35.57GB 56% 12.17GB 0B 0% 64.05GB 199% - 50.26GB 0B 0%
The key points are as follows.
- Physical Space Used for All Volumes dropped significantly, from 29.68GB to 14.66GB
  - However, Physical Space Used by the Aggregate stayed at around 30GB, almost unchanged
  - Instead, Physical Size Used by Snapshot Copies grew from 337.0MB to 15.48GB
- Inactive data compression does not run in conjunction with this operation
I believe this is because deduplication reduced the number of data blocks referenced by the active file system (AFS), while the Snapshot still holds the data blocks for the duplicated data.
Checking the Snapshot information, its size was indeed 15.48GB.
::*> snapshot show -volume vol1_dst ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm2 vol1_dst snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 15.48GB 46% 51% ::*> snapshot show -volume vol1_dst -instance Vserver: svm2 Volume: vol1_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 Snapshot Data Set ID: 4294968323 Snapshot Master Data Set ID: 6452445361 Creation Time: Fri Nov 17 10:58:10 2023 Snapshot Busy: true List of Owners: snapmirror Snapshot Size: 15.48GB Percentage of Total Blocks: 46% Percentage of Used Blocks: 51% Consistency Point Count: 106 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 1 Logical Snap ID: 1 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror, SMDeleteMe=snapmirror Instance UUID: c1227bc5-70df-4d9b-8e96-e27e12bbe28c Version UUID: ec783025-b254-405d-a435-8dc3102ba291 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 30.00GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 20.54GB VBN Zero Savings from Snapshot: 0B Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 50.55GB Performance Metadata from Snapshot: 1.30MB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false
A Snapshot's size can roughly be read as the amount of difference from the AFS and other Snapshots. Since no other Snapshot exists this time, the roughly 15GB can be attributed to the difference from the AFS created when the manually triggered TSSE run deduplicated the data.
However, because the Snapshot still holds the data blocks that deduplication freed from the AFS, the physical amount of data has not changed.
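If we actually wanted to reclaim the physical space here, the approach would be to break the relationship and delete the SnapMirror Snapshot that is locking the blocks. A minimal sketch, assuming the relationship is no longer needed and reusing the names from this walkthrough:

# Make the destination writable, releasing it from the relationship
::*> snapmirror break -destination-path svm2:vol1_dst
# Delete the SnapMirror Snapshot that still pins the pre-dedupe blocks
::*> snapshot delete -vserver svm2 -volume vol1_dst -snapshot snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810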
This Snapshot-locking behavior is also described in the following NetApp KB.
No. In ONTAP, efficiency features are applied only to blocks that are in the active file system (AFS).
The TR (Technical Report) likewise states that blocks in a Snapshot are locked and cannot be deduplicated, and that deduplication should be run before SnapMirror transfers if you want maximum space savings.
SnapMirror creates a Snapshot copy before performing an update transfer. Any blocks in the Snapshot copy are locked and cannot be deduplicated. Therefore, if maximum space savings from deduplication are required, run the dedupe process before performing SnapMirror updates.
TR-4015: SnapMirror Configuration and Best Practices Guide for ONTAP 9.11.1
If the SnapMirror source volume uses Tiering Policy None, we would want to apply TSSE on the source before transferring.
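As a concrete example of that recommendation, a sketch of the order of operations using the volumes in this post (`-scan-old-data true` makes the efficiency run cover data that already exists on the volume):

# Deduplicate the existing data on the source volume first
::*> volume efficiency start -vserver svm -volume vol1 -scan-old-data true
# After the efficiency operation completes, transfer the already-deduped data
::*> snapmirror update -destination-path svm2:vol1_dst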
Verifying Inactive data compression on the SnapMirror destination volume
Having confirmed that TSSE's post-process deduplication runs on a SnapMirror destination volume, let's next verify the behavior of Inactive data compression.
With `volume efficiency inactive-data-compression modify`, configure the volume so that data untouched for one day is judged Cold Data and compressed.
::*> volume efficiency inactive-data-compression modify -volume vol1_dst -threshold-days 1 -threshold-days-min 1 -threshold-days-max 1
Let's wait about a day in this state.
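As an aside, if you cannot wait for the roughly 24-hour scan cycle, the scan can also be kicked off by hand; adding `-inactive-days 0` skips the age threshold entirely. A sketch (in this walkthrough I simply waited instead):

# Start an Inactive data compression scan immediately
::*> volume efficiency inactive-data-compression start -volume vol1_dst
# Or treat all data as cold, regardless of how long it has sat untouched
::*> volume efficiency inactive-data-compression start -volume vol1_dst -inactive-days 0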
::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 1561 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0%
Since `Time since Last Inactive Data Compression Scan ended(sec)` has been updated, a scan ran without me noticing. However, `Number of Compression Done Blocks` is 0, so nothing appears to have actually been compressed. In fact, `Number of Cold Blocks Encountered` is also 0, meaning no blocks were judged to be Cold Data at all.
I also checked the EMS events, but there was no trace of Inactive data compression running.
::*> event log show -severity <=DEBUG Time Node Severity Event ------------------- ---------------- ------------- --------------------------- 11/17/2023 04:35:59 FsxId0648fddba7bd041af-02 INFORMATIONAL Nblade.nfsCredCacheFlushed: When the administrator modifies the "extended-groups-limit" option or "auth-sys-extended-groups" option using the "vserver nfs modify" command, the entire credential cache is flushed that holds credentials on connections that use mixed-mode security style volumes or RPCSEC_GSS authentication or extended groups over AUTH_SYS. This makes subsequent operations on such connections slower for a short while, until the credential cache is repopulated. The value of "auth-sys-extended-groups" option is 0 (1:enabled, 0:disabled). The value of "extended-groups-limit" option is 32. 11/17/2023 04:35:47 FsxId0648fddba7bd041af-02 NOTICE arw.vserver.state: Anti-ransomware was changed to "disabled" on Vserver "svm2" (UUID: "c5f8f20e-8502-11ee-adbe-13bc02fe3110"). 11/17/2023 04:33:06 FsxId0648fddba7bd041af-02 NOTICE snaplock.sys.compclock.set: The compliance clock time of the system has been set to Fri Nov 17 04:33:06 UTC 2023. Reason: initialized by administrator. 3 entries were displayed.
Let's run it manually.
::*> volume efficiency inactive-data-compression start -volume vol1_dst Inactive data compression scan started on volume "vol1_dst" in Vserver "svm2" ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 0% Phase1 L1s Processed: 2771 Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 0 Phase2 Blocks Processed: 0 Number of Cold Blocks Encountered: 431032 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 99248 Time since Last Inactive Data Compression Scan ended(sec): 99239 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 99239 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 0% Phase1 L1s Processed: 22664 Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 0 Phase2 Blocks Processed: 0 Number of Cold Blocks Encountered: 3118568 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 99300 Time since Last Inactive Data Compression Scan ended(sec): 99291 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 99291 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 28% Phase1 L1s Processed: 51400 Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 10351040 Phase2 Blocks Processed: 2910738 Number of Cold Blocks Encountered: 4523344 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 115 Time since Last Inactive Data Compression Scan ended(sec): 26 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 26 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - 
Percentage: 66% Phase1 L1s Processed: 51400 Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 10351040 Phase2 Blocks Processed: 6873740 Number of Cold Blocks Encountered: 4523344 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 165 Time since Last Inactive Data Compression Scan ended(sec): 75 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 75 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 8432464 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 110 Time since Last Inactive Data Compression Scan ended(sec): 11 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 11 Average time for Cold Data Compression(sec): 31 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 97%
`Number of Cold Blocks Encountered` is 8432464, so about 8,432,464 × 4KiB / 1,024 / 1,024 ≈ 32.16GiB of data blocks appear to have been judged Cold Data. (ONTAP data blocks are 4KiB each.)
This roughly matches the physical data size, so I believe nearly all of the data was classified as Cold Data.
However, `Number of Compression Done Blocks` is 0, so no compression took place. Checking `Incompressible Data Percentage`, it was 97%.
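For reference, the same 4KiB-block-to-GiB conversion can be done from any shell with plain awk arithmetic:

# Convert a 4KiB block count into GiB
$ echo 8432464 | awk '{ printf "%.2f GiB\n", $1 * 4 / 1024 / 1024 }'
32.17 GiB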
Inactive data compression compresses in 32KB units. Regular Secondary Compression also compresses in 32KB units, but no compression is actually written unless the compression ratio is at least 25%, i.e. unless a 32KB group shrinks by at least 8KB.
The cause seems to be one of two things: either transferring the data with deduplication already applied is the problem, or the test files being generated from /dev/urandom, and therefore essentially incompressible, is.
Retry (1st attempt)
Preparing the test files
Let's retry.
Working on the hypothesis that the previous attempt got no savings from Inactive data compression because the data was transferred with deduplication already applied, this time I transfer the data without letting deduplication take effect.
To that end, each file is copied only after the data of the previously copied file has been fully tiered out to capacity pool storage, so that deduplication has nothing left on the performance tier to match against.
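A sketch of how to confirm that the previous copy has finished tiering out before making the next one, reusing the footprint command that appears below:

# Check how much of vol1 still sits on the performance tier
::*> volume show-footprint -volume vol1
# Make the next copy once "Footprint in Performance Tier" is close to zero
# and "Footprint in FSxFabricpoolObjectStore" accounts for the copied data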
I copied files until there were about 20 of them.
# テスト用ファイルのコピー $ sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_11 $ sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_12 $ sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_13 $ sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_14 $ sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_15 $ sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_16 $ sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_17 $ sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_18 $ sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_19 $ sudo cp -p /mnt/fsxn/vol1/test_file_1 /mnt/fsxn/vol1/test_file_20 # コピーがされたことを確認 $ ls -l /mnt/fsxn/vol1 total 105270640 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_1 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_10 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_11 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_12 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_13 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_14 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_15 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_16 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_17 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_18 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_19 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_2 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_20 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_3 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_4 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_5 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_6 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_7 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_8 -rw-r--r--. 1 root root 5368709120 Nov 17 08:13 test_file_9 $ df -hT -t nfs4 Filesystem Type Size Used Avail Use% Mounted on svm-04855fdf5ed7737a8.fs-0762660cbce3713bf.fsx.us-east-1.amazonaws.com:/vol1 nfs4 122G 81G 42G 67% /mnt/fsxn/vol1
Let's check the aggregate and volume information after copying the test files.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 857.7GB 0% online 3 FsxId0762660cbce raid0, 3713bf-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 3.29GB 0% Aggregate Metadata 801.4MB 0% Snapshot Reserve 45.36GB 5% Total Used 49.43GB 5% Total Physical Used 13.98GB 2% Total Provisioned Space 145GB 16% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 80.03GB - Logical Referenced Capacity 79.65GB - Logical Unreferenced Capacity 393.5MB - Total Physical Used 80.03GB - 2 entries were displayed. ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 80.44GB 9% Footprint in Performance Tier 1.64GB 2% Footprint in FSxFabricpoolObjectStore 79.34GB 98% Volume Guarantee 0B 0% Flexible Volume Metadata 537.3MB 0% Deduplication Metadata 30.07MB 0% Deduplication 30.07MB 0% Delayed Frees 542.6MB 0% File Operation Metadata 4KB 0% Total Footprint 81.53GB 9% Effective Total Footprint 81.53GB 9% ::*> volume show-footprint -volume vol1 -instance Vserver: svm Volume Name: vol1 Volume MSID: 2163879579 Volume DSID: 1026 Vserver UUID: 0d9b83f3-8520-11ee-84de-4b7ecb818153 Aggregate Name: aggr1 Aggregate UUID: 44857d47-851f-11ee-84de-4b7ecb818153 Hostname: FsxId0762660cbce3713bf-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: 30.07MB Deduplication Footprint Percent: 0% Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 80.44GB Volume Data Footprint Percent: 9% Flexible Volume Metadata Footprint: 537.3MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 542.6MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 81.53GB Total Footprint Percent: 9% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 1.64GB Volume Footprint bin0 Percent: 2% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 79.34GB Volume Footprint bin1 Percent: 98% Total Deduplication Footprint: 30.07MB Total Deduplication Footprint Percent: 0% Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 81.53GB Effective Total after Footprint Data Reduction Percent: 9% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0762660cbce3713bf-01 Total Storage Efficiency Ratio: 1.83:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 1.21:1 Total Data Reduction Efficiency Ratio w/o Snapshots & 
FlexClones: 1.21:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0762660cbce3713bf-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 150.5GB Total Physical Used: 82.35GB Total Storage Efficiency Ratio: 1.83:1 Total Data Reduction Logical Used Without Snapshots: 100.0GB Total Data Reduction Physical Used Without Snapshots: 82.35GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.21:1 Total Data Reduction Logical Used without snapshots and flexclones: 100.0GB Total Data Reduction Physical Used without snapshots and flexclones: 82.35GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.21:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 3.09GB Total Physical Used in FabricPool Performance Tier: 4.12GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 2.05GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.12GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Logical Space Used for All Volumes: 100.0GB Physical Space Used for All Volumes: 79.26GB Space Saved by Volume Deduplication: 20.79GB Space Saved by Volume Deduplication and pattern detection: 20.79GB Volume Deduplication Savings ratio: 1.26:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.26:1 Logical Space Used by the Aggregate: 82.35GB Physical Space Used by the Aggregate: 82.35GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 50.42GB Physical Size Used by Snapshot Copies: 1.68MB Snapshot Volume Data Reduction Ratio: 30811.64:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 30811.64:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume efficiency show -volume vol1 -instance Vserver Name: svm Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:00:49 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 13b698f2-8520-11ee-bfa4-f9928d2238a7 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Sun Nov 19 09:21:56 2023 Last Success Operation End: Sun Nov 19 09:35:00 2023 Last Operation Begin: Sun Nov 19 09:21:56 2023 Last Operation End: Sun Nov 19 09:35:00 2023 Last Operation Size: 13.41GB Last Operation Error: - Operation Frequency: Once approxmiately every 0 day(s) and 12 hour(s) Changelog Usage: 27% Changelog Size: 365.9MB Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 101.2GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 3515704 
Duplicate Blocks Found: 2204984 Sorting Begin: Sun Nov 19 09:22:01 UTC 2023 Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: Sun Nov 19 09:22:44 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 3515704 Same FP Count: 2204984 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 2204984 Stale Donor Count: 2204984 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: - Number of indirect blocks skipped by compression phase: - Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -volume vol1 -instance Volume: vol1 Vserver: svm Is Enabled: true Scan Mode: - Progress: IDLE Status: FAILURE Compression Algorithm: lzopro Failure Reason: CA tiering Policy is all Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 299 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume show -volume vol1 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ----- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- 
----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 128GB 0 41.16GB 128GB 121.6GB 80.44GB 66% 20.79GB 21% 20.79GB 21% 9.56GB 0B 0% 101.2GB 83% - 101.2GB - -
Because I added ten 5GiB files, the volume's usage has increased by a full 50GiB. The deduplication savings are unchanged.
Re-running the SnapMirror transfer
Now let's run the SnapMirror transfer again.
# 現在のSnapMirror relastionshpの確認 ::*> snapshot show -volume vol1_dst ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm2 vol1_dst snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 15.48GB 46% 51% ::*> snapshot show -volume vol1_dst -instance Vserver: svm2 Volume: vol1_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 Snapshot Data Set ID: 4294968323 Snapshot Master Data Set ID: 6452445361 Creation Time: Fri Nov 17 10:58:10 2023 Snapshot Busy: true List of Owners: snapmirror Snapshot Size: 15.48GB Percentage of Total Blocks: 46% Percentage of Used Blocks: 51% Consistency Point Count: 106 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 1 Logical Snap ID: 1 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror, SMDeleteMe=snapmirror Instance UUID: c1227bc5-70df-4d9b-8e96-e27e12bbe28c Version UUID: ec783025-b254-405d-a435-8dc3102ba291 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 30.00GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 20.54GB VBN Zero Savings from Snapshot: 0B Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 50.55GB Performance Metadata from Snapshot: 1.30MB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false # SnapMirrorの再同期 ::*> snapmirror update -destination-path svm2:vol1_dst Operation is queued: snapmirror update of destination "svm2:vol1_dst". 
# SnapMirror relationshipのステータスの確認 ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Transferring 0B true 11/19 09:42:32 ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Finalizing 51.12GB true 11/19 09:49:54 ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - ::*> snapmirror show -instance Source Path: svm:vol1 Source Cluster: - Source Vserver: svm Source Volume: vol1 Destination Path: svm2:vol1_dst Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol1_dst Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Snapmirrored Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: - Newest Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-19_094232 Newest Snapshot Timestamp: 11/19 09:42:32 Exported Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-19_094232 Exported Snapshot Timestamp: 11/19 09:42:32 Healthy: true Relationship ID: 759fa7eb-8530-11ee-adbe-13bc02fe3110 Source Vserver UUID: 0d9b83f3-8520-11ee-84de-4b7ecb818153 Destination Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: update Last Transfer Error: - Last Transfer Error Codes: - Last Transfer Size: 51.12GB Last Transfer Network Compression Ratio: 1:1 Last Transfer Duration: 0:7:35 Last Transfer From: svm:vol1 Last Transfer End Timestamp: 11/19 09:50:07 Unhealthy Reason: - Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: 0:7:58 Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0648fddba7bd041af-01 Identity Preserve Vserver DR: - Number of Successful Updates: 2 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 87312788192 Total Transfer Time in Seconds: 806 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
About 50GB of data was transferred in just under 8 minutes.
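That works out to roughly 115MB/s. A quick back-of-the-envelope check with awk, using the Last Transfer Size (51.12GB) and Last Transfer Duration (0:7:35) reported above:

$ awk 'BEGIN { printf "%.0f MB/s\n", 51.12 * 1024 / (7 * 60 + 35) }'
115 MB/s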
Checking the Snapshot transferred by SnapMirror, its reported size fluctuates, sometimes 345MB and sometimes 1GB. Some processing may be running behind the scenes.
::*> snapshot show -volume vol1_dst ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm2 vol1_dst snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 15.48GB 17% 19% snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-19_094232 345.2MB 0% 1% 2 entries were displayed. ::*> snapshot show -volume vol1_dst -instance Vserver: svm2 Volume: vol1_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 Snapshot Data Set ID: 4294968323 Snapshot Master Data Set ID: 6452445361 Creation Time: Fri Nov 17 10:58:10 2023 Snapshot Busy: false List of Owners: - Snapshot Size: 15.48GB Percentage of Total Blocks: 17% Percentage of Used Blocks: 19% Consistency Point Count: 106 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 1 Logical Snap ID: 1 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror, SMDeleteMe=snapmirror Instance UUID: c1227bc5-70df-4d9b-8e96-e27e12bbe28c Version UUID: ec783025-b254-405d-a435-8dc3102ba291 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 30.00GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 20.54GB VBN Zero Savings from Snapshot: 0B Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 50.55GB Performance Metadata from Snapshot: 1.30MB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false Vserver: svm2 Volume: vol1_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-19_094232 Snapshot Data Set ID: 8589935619 Snapshot Master Data Set ID: 10747412657 Creation Time: Sun Nov 19 09:42:32 2023 Snapshot Busy: true List of Owners: snapmirror Snapshot Size: 1.03GB Percentage of Total Blocks: 1% Percentage of Used Blocks: 2% Consistency Point Count: 337 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 2 Logical Snap ID: 2 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror Instance UUID: ca5c33a6-f3db-4b1a-b897-1f7e5cda5a0f Version UUID: 1e3adb71-da0b-4e58-b7d9-e2d8a234a3b8 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 66.14GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 35.37GB VBN Zero Savings from Snapshot: 0B Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 101.5GB Performance Metadata from Snapshot: 1.02MB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false 2 entries were displayed.
Let's check the aggregate and volume information after the SnapMirror update.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 775.8GB 10% online 3 FsxId0648fddba7b raid0, d041af-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 85.64GB 9% Aggregate Metadata 310.2MB 0% Snapshot Reserve 45.36GB 5% Total Used 131.3GB 14% Total Physical Used 91.64GB 10% Total Provisioned Space 93.85GB 10% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> volume show-footprint -volume vol1_dst Vserver : svm2 Volume : vol1_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 82.70GB 9% Footprint in Performance Tier 83.12GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 490.8MB 0% Deduplication Metadata 1.01GB 0% Deduplication 30.07MB 0% Temporary Deduplication 1001MB 0% Delayed Frees 437.7MB 0% File Operation Metadata 4KB 0% Total Footprint 84.61GB 9% Effective Total Footprint 84.61GB 9% ::*> volume show-footprint -volume vol1_dst -instance Vserver: svm2 Volume Name: vol1_dst Volume MSID: 2157478065 Volume DSID: 1027 Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Aggregate Name: aggr1 Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110 Hostname: FsxId0648fddba7bd041af-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: 30.07MB Deduplication Footprint Percent: 0% Temporary Deduplication Footprint: 1001MB Temporary Deduplication Footprint Percent: 0% Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 82.70GB Volume Data Footprint Percent: 9% Flexible Volume Metadata Footprint: 490.8MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 437.7MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 84.61GB Total Footprint Percent: 9% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 83.12GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: 1.01GB Total Deduplication Footprint Percent: 0% Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 84.61GB Effective Total after Footprint Data Reduction Percent: 9% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0648fddba7bd041af-01 Total Storage Efficiency Ratio: 3.06:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 2.22:1 Total Data Reduction 
Efficiency Ratio w/o Snapshots & FlexClones: 2.22:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0648fddba7bd041af-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 253.2GB Total Physical Used: 82.82GB Total Storage Efficiency Ratio: 3.06:1 Total Data Reduction Logical Used Without Snapshots: 101.1GB Total Data Reduction Physical Used Without Snapshots: 45.56GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.22:1 Total Data Reduction Logical Used without snapshots and flexclones: 101.1GB Total Data Reduction Physical Used without snapshots and flexclones: 45.56GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.22:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 254.4GB Total Physical Used in FabricPool Performance Tier: 84.18GB Total FabricPool Performance Tier Storage Efficiency Ratio: 3.02:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 102.3GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 46.93GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.18:1 Logical Space Used for All Volumes: 101.1GB Physical Space Used for All Volumes: 44.28GB Space Saved by Volume Deduplication: 56.82GB Space Saved by Volume Deduplication and pattern detection: 56.82GB Volume Deduplication Savings ratio: 2.28:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 2.28:1 Logical Space Used by the Aggregate: 82.82GB Physical Space Used by the Aggregate: 82.82GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 152.1GB Physical Size Used by Snapshot Copies: 37.25GB Snapshot Volume Data Reduction Ratio: 4.08:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 4.08:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume efficiency show -volume vol1_dst -instance Vserver Name: svm2 Volume Name: vol1_dst Volume Path: /vol/vol1_dst State: Enabled Auto State: - Status: Active Progress: 22286364 KB (42%) Done Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Sat Nov 18 01:43:42 2023 Last Success Operation End: Sat Nov 18 01:52:03 2023 Last Operation Begin: Sat Nov 18 01:43:42 2023 Last Operation End: Sat Nov 18 01:52:03 2023 Last Operation Size: 29.34GB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 700MB Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 102.3GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Saving Checkpoint Time: Sun Nov 19 09:51:08 UTC 2023 Checkpoint Operation Type: Start Checkpoint Stage: Saving_sharing Checkpoint Substage: - Checkpoint Progress: 0 KB (0%) Done Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 
14417920 Duplicate Blocks Found: 13107200 Sorting Begin: Sun Nov 19 09:50:23 UTC 2023 Blocks Deduplicated: 11141098 Blocks Snapshot Crunched: 0 De-duping Begin: Sun Nov 19 09:50:49 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 14417920 Same FP Count: 13107200 Same FBN: 0 Same Data: 11141098 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 8432464 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 11872 Time since Last Inactive Data Compression Scan ended(sec): 11773 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 11773 Average time for Cold Data Compression(sec): 31 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 97% ::*> volume show -volume vol1_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- 
------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol1_dst 91.85GB 0 9.15GB 91.85GB 87.26GB 78.10GB 89% 56.83GB 42% 56.83GB 42% 40.91GB 0B 0% 134.7GB 154% - 102.1GB 0B 0%
`Progress: 22286364 KB (42%) Done` shows that TSSE is at work.
The amount of data saved by deduplication has also jumped from 35.57GB to 56.83GB.
Let's keep watching TSSE.
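To keep an eye on it without retyping the command, a polling loop from a client works too. A sketch assuming SSH access to the FSxN management endpoint as fsxadmin (the hostname is illustrative):

$ while true; do ssh fsxadmin@management.fs-xxxxxxxx.fsx.us-east-1.amazonaws.com "volume efficiency show -volume vol1_dst"; sleep 60; done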
::*> volume efficiency show -volume vol1_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol1_dst Enabled Active 28785128 KB (54%) Done - ::*> volume efficiency show -volume vol1_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol1_dst Enabled Active 36842428 KB (70%) Done - ::*> volume efficiency show -volume vol1_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol1_dst Enabled Active 42023712 KB (80%) Done - ::*> volume efficiency show -volume vol1_dst -instance Vserver Name: svm2 Volume Name: vol1_dst Volume Path: /vol/vol1_dst State: Enabled Auto State: - Status: Active Progress: 42243384 KB (80%) Done Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Sat Nov 18 01:43:42 2023 Last Success Operation End: Sat Nov 18 01:52:03 2023 Last Operation Begin: Sat Nov 18 01:43:42 2023 Last Operation End: Sat Nov 18 01:52:03 2023 Last Operation Size: 29.34GB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 700MB Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 102.3GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Saving Checkpoint Time: Sun Nov 19 09:51:08 UTC 2023 Checkpoint Operation Type: Start Checkpoint Stage: Saving_sharing Checkpoint Substage: - Checkpoint Progress: 0 KB (0%) Done Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 14417920 Duplicate Blocks Found: 13107200 Sorting Begin: Sun Nov 19 09:50:23 UTC 2023 Blocks Deduplicated: 21119354 Blocks Snapshot Crunched: 0 De-duping Begin: Sun Nov 19 09:50:49 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 14417920 Same FP Count: 13107200 Same FBN: 0 Same Data: 21119354 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency show -volume vol1_dst Vserver Volume 
State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol1_dst Enabled Active Inode:103 of 32774, curr_fbn: 1040400 of max_fbn: 1310719 - ::*> volume efficiency show -volume vol1_dst -instance Vserver Name: svm2 Volume Name: vol1_dst Volume Path: /vol/vol1_dst State: Enabled Auto State: - Status: Active Progress: Idle for 00:00:00 Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Sat Nov 18 01:43:42 2023 Last Success Operation End: Sat Nov 18 01:52:03 2023 Last Operation Begin: Sat Nov 18 01:43:42 2023 Last Operation End: Sat Nov 18 01:52:03 2023 Last Operation Size: 29.34GB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 102.5GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 14417920 Duplicate Blocks Found: 13107200 Sorting Begin: Sun Nov 19 09:50:23 UTC 2023 Blocks Deduplicated: 26214400 Blocks Snapshot Crunched: 0 De-duping Begin: Sun Nov 19 09:50:49 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 14417920 Same FP Count: 13107200 Same FBN: 0 Same Data: 26214400 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: Sun Nov 19 09:59:53 UTC 2023 Number of L1s processed by compression phase: 77049 Number of indirect blocks skipped by compression phase: L1: 25751 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true
It appears to have processed the roughly 50GB of data in about 10 minutes (Last Operation Size: 50GB, from 09:50:07 to 10:00:07).
Let's check the aggregate and volume information after TSSE has finished.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 808.5GB 6% online 3 FsxId0648fddba7b raid0, d041af-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 52.26GB 6% Aggregate Metadata 1010MB 0% Snapshot Reserve 45.36GB 5% Total Used 98.60GB 11% Total Physical Used 73.61GB 8% Total Provisioned Space 36.82GB 4% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> volume show-footprint -volume vol1_dst Vserver : svm2 Volume : vol1_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 31.20GB 3% Footprint in Performance Tier 51.21GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 0B 0% Deduplication Metadata 30.07MB 0% Deduplication 30.07MB 0% Delayed Frees 20.01GB 2% File Operation Metadata 4KB 0% Total Footprint 51.24GB 6% Effective Total Footprint 51.24GB 6% ::*> volume show-footprint -volume vol1_dst -instance Vserver: svm2 Volume Name: vol1_dst Volume MSID: 2157478065 Volume DSID: 1027 Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Aggregate Name: aggr1 Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110 Hostname: FsxId0648fddba7bd041af-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: 30.07MB Deduplication Footprint Percent: 0% Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 31.20GB Volume Data Footprint Percent: 3% Flexible Volume Metadata Footprint: 0B Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 20.01GB Delayed Free Blocks Percent: 2% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 51.24GB Total Footprint Percent: 6% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 51.21GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: 30.07MB Total Deduplication Footprint Percent: 0% Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 51.24GB Effective Total after Footprint Data Reduction Percent: 6% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0648fddba7bd041af-01 Total Storage Efficiency Ratio: 6.35:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 4.15:1 Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 4.15:1 
::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0648fddba7bd041af-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 251.9GB Total Physical Used: 39.64GB Total Storage Efficiency Ratio: 6.35:1 Total Data Reduction Logical Used Without Snapshots: 100.2GB Total Data Reduction Physical Used Without Snapshots: 24.16GB Total Data Reduction Efficiency Ratio Without Snapshots: 4.15:1 Total Data Reduction Logical Used without snapshots and flexclones: 100.2GB Total Data Reduction Physical Used without snapshots and flexclones: 24.16GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 4.15:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 253.0GB Total Physical Used in FabricPool Performance Tier: 40.91GB Total FabricPool Performance Tier Storage Efficiency Ratio: 6.18:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 101.3GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 25.42GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.98:1 Logical Space Used for All Volumes: 100.2GB Physical Space Used for All Volumes: 14.65GB Space Saved by Volume Deduplication: 85.57GB Space Saved by Volume Deduplication and pattern detection: 85.57GB Volume Deduplication Savings ratio: 6.84:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 6.84:1 Logical Space Used by the Aggregate: 39.64GB Physical Space Used by the Aggregate: 39.64GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 151.7GB Physical Size Used by Snapshot Copies: 15.48GB Snapshot Volume Data Reduction Ratio: 9.80:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 9.80:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume efficiency show -volume vol1_dst -instance Vserver Name: svm2 Volume Name: vol1_dst Volume Path: /vol/vol1_dst State: Enabled Auto State: - Status: Idle Progress: Idle for 00:10:32 Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Sun Nov 19 09:50:07 2023 Last Success Operation End: Sun Nov 19 10:00:07 2023 Last Operation Begin: Sun Nov 19 09:50:07 2023 Last Operation End: Sun Nov 19 10:00:07 2023 Last Operation Size: 50GB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 101.3GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 14417920 Duplicate Blocks Found: 13107200 Sorting Begin: Sun Nov 19 09:50:23 UTC 2023 Blocks Deduplicated: 
26214400 Blocks Snapshot Crunched: 0 De-duping Begin: Sun Nov 19 09:50:49 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 14417920 Same FP Count: 13107200 Same FBN: 0 Same Data: 26214400 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: Sun Nov 19 09:59:53 UTC 2023 Number of L1s processed by compression phase: 77049 Number of indirect blocks skipped by compression phase: L1: 25751 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 1096 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 625 Time since Last Inactive Data Compression Scan ended(sec): 613 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 613 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 97% ::*> volume show -volume vol1_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- 
------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol1_dst 34.82GB 0 3.62GB 34.82GB 33.08GB 29.46GB 89% 85.57GB 74% 85.57GB 74% 12.17GB 0B 0% 114.8GB 347% - 101.1GB 0B 0% ::*> snapshot show -volume vol1_dst ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm2 vol1_dst snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 15.48GB 44% 50% snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-19_094232 180KB 0% 0% 2 entries were displayed. ::*> snapshot show -volume vol1_dst -instance Vserver: svm2 Volume: vol1_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-17_105810 Snapshot Data Set ID: 4294968323 Snapshot Master Data Set ID: 6452445361 Creation Time: Fri Nov 17 10:58:10 2023 Snapshot Busy: false List of Owners: - Snapshot Size: 15.48GB Percentage of Total Blocks: 44% Percentage of Used Blocks: 50% Consistency Point Count: 106 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 1 Logical Snap ID: 1 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror, SMDeleteMe=snapmirror Instance UUID: c1227bc5-70df-4d9b-8e96-e27e12bbe28c Version UUID: ec783025-b254-405d-a435-8dc3102ba291 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 30.00GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 20.54GB VBN Zero Savings from Snapshot: 0B Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 50.55GB Performance Metadata from Snapshot: 1.30MB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false Vserver: svm2 Volume: vol1_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478065.2023-11-19_094232 Snapshot Data Set ID: 12884902915 Snapshot Master Data Set ID: 15042379953 Creation Time: Sun Nov 19 09:42:32 2023 Snapshot Busy: true List of Owners: snapmirror Snapshot Size: 180KB Percentage of Total Blocks: 0% Percentage of Used Blocks: 0% Consistency Point Count: 390 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 3 Logical Snap ID: 3 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror Instance UUID: ea510629-62db-45ce-828f-026578e874f2 Version UUID: 1e3adb71-da0b-4e58-b7d9-e2d8a234a3b8 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 15.72GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 85.37GB VBN Zero Savings from Snapshot: 0B Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 101.1GB Performance Metadata from Snapshot: 10.30MB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false 2 entries were displayed.
The key points are as follows:

- The logical size has grown, as shown by `Logical Size Used by Snapshot Copies: 151.7GB` and `Logical Space Used for All Volumes: 100.2GB`
  - However, `Physical Size Used by Snapshot Copies: 15.48GB` and `Physical Space Used for All Volumes: 14.65GB` show that the physical size is unchanged from before the SnapMirror resync
- The Snapshot transferred by the SnapMirror resync is very small, at 180KB
  - In other words, there is almost no difference from the AFS (active file system)
- The amount of data saved by deduplication grew from 35.57GB to 85.57GB, an increase matching the 50GB that was transferred
I believe this deduplication happened because every file added before the resync is identical to a file contained in the initially transferred Snapshot.
I mentioned earlier that data blocks held by a Snapshot are not released by deduplication, but deduplication performed between Snapshots transferred by SnapMirror apparently does release data blocks.
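If you want to check whether this post-transfer deduplication actually ran on the destination, the last background Storage Efficiency operation recorded on the volume is a quick indicator. A minimal sketch; the field names here are my guesses inferred from the `-instance` output above, so adjust as needed:

```
::*> volume efficiency show -vserver svm2 -volume vol1_dst -fields state, last-op-state, last-op-begin, last-op-end, last-op-size
```

If the last-operation timestamps line up with the end of the SnapMirror transfer and the last operation size is non-zero, the automatic deduplication ran.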
Checking Inactive data compression behavior on the SnapMirror destination volume
Let's check how Inactive data compression behaves on the SnapMirror destination volume.
Since deduplication already eliminated most of the data blocks transferred by the resync, I suspect this will end the same way.
Even one day after the post-SnapMirror deduplication finished, no data blocks had been compressed, as shown below.
::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 12050 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 97%
The aggregate and volume information was also unchanged.
Let's run Inactive data compression manually.
::*> volume efficiency inactive-data-compression start -volume vol1_dst Inactive data compression scan started on volume "vol1_dst" in Vserver "svm2" ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 6% Phase1 L1s Processed: 71251 Phase1 Lns Skipped: L1: 25939 L2: 22 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 25127376 Phase2 Blocks Processed: 1579203 Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 15 Time since Last Inactive Data Compression Scan ended(sec): 3 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 3 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 97% ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 9% Phase1 L1s Processed: 71251 Phase1 Lns Skipped: L1: 25939 L2: 22 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 25127376 Phase2 Blocks Processed: 2285511 Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 17 Time since Last Inactive Data Compression Scan ended(sec): 5 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 5 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 97% ::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance Volume: vol1_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 192400 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 40 Time since Last Inactive Data Compression Scan ended(sec): 14 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 14 Average time for Cold Data Compression(sec): 41 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 97%
As expected, `Number of Compression Done Blocks` remains 0, so nothing was compressed.
I am starting to suspect this is because the test files were generated from /dev/urandom, which produces essentially incompressible data (note the `Incompressible Data Percentage: 97%` above).
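As a quick sanity check of that hypothesis, you can compare how well random data and an ordinary file compress on the client. gzip is only a stand-in for ONTAP's lzopro here, but the tendency carries over:

```
# Random data barely compresses; the output stays close to the 1MiB input
$ head -c 1M /dev/urandom | gzip -c | wc -c

# A typical text file compresses far better (any ordinary file will do)
$ gzip -c /etc/services | wc -c
```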
Retry (2nd attempt)
Creating the test files
Let's change the test files and try again.
I prepared a new volume, vol2.
The test data is /usr from Amazon Linux 2023. I copy this directory 10 times.
To keep deduplication from kicking in beforehand as much as possible, I confirm that the previously copied files have been tiered to capacity pool storage before making the next copy (see the tiering check sketch after the commands below).
```
# Create the mount point
$ sudo mkdir -p /mnt/fsxn/vol2

# Mount vol2
$ sudo mount -t nfs svm-04855fdf5ed7737a8.fs-0762660cbce3713bf.fsx.us-east-1.amazonaws.com:/vol2 /mnt/fsxn/vol2

# Confirm the volume is mounted
$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-04855fdf5ed7737a8.fs-0762660cbce3713bf.fsx.us-east-1.amazonaws.com:/vol2 nfs4   16G  320K   16G   1% /mnt/fsxn/vol2

# Copy /usr
$ sudo cp -pr /usr /mnt/fsxn/vol2/usr1
$ sudo cp -pr /usr /mnt/fsxn/vol2/usr2
$ sudo cp -pr /usr /mnt/fsxn/vol2/usr3
$ sudo cp -pr /usr /mnt/fsxn/vol2/usr4
$ sudo cp -pr /usr /mnt/fsxn/vol2/usr5
$ sudo cp -pr /usr /mnt/fsxn/vol2/usr6
$ sudo cp -pr /usr /mnt/fsxn/vol2/usr7
$ sudo cp -pr /usr /mnt/fsxn/vol2/usr8
$ sudo cp -pr /usr /mnt/fsxn/vol2/usr9
$ sudo cp -pr /usr /mnt/fsxn/vol2/usr10

# Confirm /usr was copied
$ ls -l /mnt/fsxn/vol2
total 40
drwxr-xr-x. 12 root root 4096 Oct  2 16:30 usr1
drwxr-xr-x. 12 root root 4096 Oct  2 16:30 usr10
drwxr-xr-x. 12 root root 4096 Oct  2 16:30 usr2
drwxr-xr-x. 12 root root 4096 Oct  2 16:30 usr3
drwxr-xr-x. 12 root root 4096 Oct  2 16:30 usr4
drwxr-xr-x. 12 root root 4096 Oct  2 16:30 usr5
drwxr-xr-x. 12 root root 4096 Oct  2 16:30 usr6
drwxr-xr-x. 12 root root 4096 Oct  2 16:30 usr7
drwxr-xr-x. 12 root root 4096 Oct  2 16:30 usr8
drwxr-xr-x. 12 root root 4096 Oct  2 16:30 usr9
```
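For the tiering check between copies, a per-volume footprint query is enough. A sketch of the kind of check I mean (this command also appears later in this article; watch the `Footprint in FSxFabricpoolObjectStore` line grow as previously copied data is tiered):

```
::*> volume show-footprint -volume vol2
```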
After creating the test files, the aggregate and volume information is as follows.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 856.9GB 1% online 3 FsxId0762660cbce raid0, 3713bf-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 4.06GB 0% Aggregate Metadata 4.05GB 0% Snapshot Reserve 45.36GB 5% Total Used 50.24GB 6% Total Physical Used 13.56GB 1% Total Provisioned Space 145GB 16% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 93.86GB - Logical Referenced Capacity 93.41GB - Logical Unreferenced Capacity 460.4MB - Space Saved by Storage Efficiency 7.71GB - Total Physical Used 86.15GB - 2 entries were displayed. ::*> volume show-footprint -volume vol2 Vserver : svm Volume : vol2 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 14.31GB 2% Footprint in Performance Tier 781.2MB 5% Footprint in FSxFabricpoolObjectStore 13.71GB 95% Volume Guarantee 0B 0% Flexible Volume Metadata 92.66MB 0% Delayed Frees 162.5MB 0% File Operation Metadata 4KB 0% Total Footprint 14.56GB 2% Footprint Data Reduction in capacity tier 1.10GB - Effective Total Footprint 13.46GB 1% ::*> volume show-footprint -volume vol2 -instance Vserver: svm Volume Name: vol2 Volume MSID: 2163879580 Volume DSID: 1027 Vserver UUID: 0d9b83f3-8520-11ee-84de-4b7ecb818153 Aggregate Name: aggr1 Aggregate UUID: 44857d47-851f-11ee-84de-4b7ecb818153 Hostname: FsxId0762660cbce3713bf-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: - Deduplication Footprint Percent: - Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 14.31GB Volume Data Footprint Percent: 2% Flexible Volume Metadata Footprint: 92.66MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 162.5MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 14.56GB Total Footprint Percent: 2% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 781.2MB Volume Footprint bin0 Percent: 5% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 13.71GB Volume Footprint bin1 Percent: 95% Total Deduplication Footprint: - Total Deduplication Footprint Percent: - Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: 1.10GB Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 13.46GB Effective Total after Footprint Data Reduction Percent: 1% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0762660cbce3713bf-01 Total Storage Efficiency Ratio: 2.48:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 1.31:1 Total Data Reduction 
Efficiency Ratio w/o Snapshots & FlexClones: 1.31:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0762660cbce3713bf-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 215.9GB Total Physical Used: 87.21GB Total Storage Efficiency Ratio: 2.48:1 Total Data Reduction Logical Used Without Snapshots: 114.6GB Total Data Reduction Physical Used Without Snapshots: 87.21GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.31:1 Total Data Reduction Logical Used without snapshots and flexclones: 114.6GB Total Data Reduction Physical Used without snapshots and flexclones: 87.21GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.31:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 4.92GB Total Physical Used in FabricPool Performance Tier: 3.19GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.54:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 2.83GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.19GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Logical Space Used for All Volumes: 114.6GB Physical Space Used for All Volumes: 93.35GB Space Saved by Volume Deduplication: 21.22GB Space Saved by Volume Deduplication and pattern detection: 21.25GB Volume Deduplication Savings ratio: 1.23:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 35.86MB Volume Data Reduction SE Ratio: 1.23:1 Logical Space Used by the Aggregate: 90.43GB Physical Space Used by the Aggregate: 87.21GB Space Saved by Aggregate Data Reduction: 3.22GB Aggregate Data Reduction SE Ratio: 1.04:1 Logical Size Used by Snapshot Copies: 101.3GB Physical Size Used by Snapshot Copies: 1.61MB Snapshot Volume Data Reduction Ratio: 64595.19:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 64595.19:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 1 ::*> volume efficiency show -volume vol2 -instance Vserver Name: svm Volume Name: vol2 Volume Path: /vol/vol2 State: Enabled Auto State: Deprioritized Status: Idle Progress: Idle for 00:12:56 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: 13b698f2-8520-11ee-bfa4-f9928d2238a7 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Mon Nov 20 11:47:21 2023 Last Success Operation End: Mon Nov 20 11:49:40 2023 Last Operation Begin: Mon Nov 20 11:47:21 2023 Last Operation End: Mon Nov 20 11:49:40 2023 Last Operation Size: 1.67GB Last Operation Error: - Operation Frequency: Once approxmiately every 0 day(s) and 5 hour(s) Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 14.77GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering 
Phase 2 Begin: - Fingerprints Sorted: 820231 Duplicate Blocks Found: 437650 Sorting Begin: Mon Nov 20 11:47:21 UTC 2023 Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: Mon Nov 20 11:47:27 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 820231 Same FP Count: 437650 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 437650 Stale Donor Count: 437650 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: true Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -volume vol2 -instance Volume: vol2 Vserver: svm Is Enabled: false Scan Mode: - Progress: IDLE Status: FAILURE Compression Algorithm: lzopro Failure Reason: Inactive data compression disabled on volume Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 6562 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume show -volume vol2 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- 
----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 16GB 0 908.6MB 16GB 15.20GB 14.31GB 94% 470.2MB 3% 470.2MB 3% 419.9MB 0B 0% 14.77GB 97% - 14.77GB - -
About 470.2MB has already been saved by deduplication, but that is acceptable for this test.
Transferring with SnapMirror
Let's perform the transfer with SnapMirror.
First, create the SnapMirror relationship.
::*> snapmirror protect -path-list svm:vol2 -destination-vserver svm2 -policy MirrorAllSnapshots -auto-initialize false -support-tiering true -tiering-policy none [Job 92] Job is queued: snapmirror protect for list of source endpoints beginning with "svm:vol2". ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - svm:vol2 XDP svm2:vol2_dst Uninitialized Idle - true - 2 entries were displayed.
The settings of the created volume were the same as they were for vol1_dst, with nothing noteworthy (a quick way to compare is sketched below).
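For reference, a hypothetical way to compare the two destination volumes' settings is to dump both and diff them. The `fsxadmin` user is the FSxN administrative account; the management endpoint placeholder is an assumption about your environment:

```
$ ssh fsxadmin@<management-endpoint> "volume show -volume vol1_dst -instance" > vol1_dst.txt
$ ssh fsxadmin@<management-endpoint> "volume show -volume vol2_dst -instance" > vol2_dst.txt
$ diff vol1_dst.txt vol2_dst.txt
```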
Let's initialize the SnapMirror relationship.
::*> snapmirror initialize -destination-path svm2:vol2_dst -source-path svm:vol2 Operation is queued: snapmirror initialize of destination "svm2:vol2_dst". ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - svm:vol2 XDP svm2:vol2_dst Uninitialized Transferring 7.10GB true 11/20 12:14:54 2 entries were displayed. ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - svm:vol2 XDP svm2:vol2_dst Uninitialized Finalizing 14.68GB true 11/20 12:15:41 2 entries were displayed. ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - svm:vol2 XDP svm2:vol2_dst Snapmirrored Idle - true - 2 entries were displayed. ::*> snapmirror show -destination-path svm2:vol2_dst Source Path: svm:vol2 Source Cluster: - Source Vserver: svm Source Volume: vol2 Destination Path: svm2:vol2_dst Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol2_dst Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Snapmirrored Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: - Newest Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_121317 Newest Snapshot Timestamp: 11/20 12:13:17 Exported Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_121317 Exported Snapshot Timestamp: 11/20 12:13:17 Healthy: true Relationship ID: 3f31c76e-879d-11ee-b677-8751a02f6bb7 Source Vserver UUID: 0d9b83f3-8520-11ee-84de-4b7ecb818153 Destination Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: update Last Transfer Error: - Last Transfer Error Codes: - Last Transfer Size: 0B Last Transfer Network Compression Ratio: 1:1 Last Transfer Duration: 0:0:0 Last Transfer From: svm:vol2 Last Transfer End Timestamp: 11/20 12:15:50 Unhealthy Reason: - Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: 0:3:2 Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0648fddba7bd041af-01 Identity Preserve Vserver DR: - Number of Successful Updates: 1 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 15760474778 Total Transfer Time in Seconds: 153 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
After initializing the SnapMirror relationship, let's check the aggregate and volume information.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 813.7GB 6% online 4 FsxId0648fddba7b raid0, d041af-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 47.72GB 5% Aggregate Metadata 319.6MB 0% Snapshot Reserve 45.36GB 5% Total Used 93.39GB 10% Total Physical Used 58.26GB 6% Total Provisioned Space 54.75GB 6% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> volume show-footprint -volume vol2_dst Vserver : svm2 Volume : vol2_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 14.38GB 2% Footprint in Performance Tier 14.55GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 102.6MB 0% Delayed Frees 175.8MB 0% File Operation Metadata 4KB 0% Total Footprint 14.65GB 2% Effective Total Footprint 14.65GB 2% ::*> volume show-footprint -volume vol2_dst -instance Vserver: svm2 Volume Name: vol2_dst Volume MSID: 2157478066 Volume DSID: 1028 Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Aggregate Name: aggr1 Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110 Hostname: FsxId0648fddba7bd041af-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: - Deduplication Footprint Percent: - Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 14.38GB Volume Data Footprint Percent: 2% Flexible Volume Metadata Footprint: 102.6MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 175.8MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 14.65GB Total Footprint Percent: 2% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 14.55GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: - Total Deduplication Footprint Percent: - Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 14.65GB Effective Total after Footprint Data Reduction Percent: 2% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0648fddba7bd041af-01 Total Storage Efficiency Ratio: 6.35:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 3.99:1 Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 3.99:1 ::*> aggr show-efficiency -instance Name of the Aggregate: 
aggr1 Node where Aggregate Resides: FsxId0648fddba7bd041af-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 280.8GB Total Physical Used: 44.20GB Total Storage Efficiency Ratio: 6.35:1 Total Data Reduction Logical Used Without Snapshots: 114.4GB Total Data Reduction Physical Used Without Snapshots: 28.66GB Total Data Reduction Efficiency Ratio Without Snapshots: 3.99:1 Total Data Reduction Logical Used without snapshots and flexclones: 114.4GB Total Data Reduction Physical Used without snapshots and flexclones: 28.66GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 3.99:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 282.5GB Total Physical Used in FabricPool Performance Tier: 46.13GB Total FabricPool Performance Tier Storage Efficiency Ratio: 6.12:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 116.1GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 30.59GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.79:1 Logical Space Used for All Volumes: 114.4GB Physical Space Used for All Volumes: 28.34GB Space Saved by Volume Deduplication: 86.00GB Space Saved by Volume Deduplication and pattern detection: 86.03GB Volume Deduplication Savings ratio: 4.04:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 35.86MB Volume Data Reduction SE Ratio: 4.04:1 Logical Space Used by the Aggregate: 44.20GB Physical Space Used by the Aggregate: 44.20GB Space Saved by Aggregate Data Reduction: 156KB Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 166.5GB Physical Size Used by Snapshot Copies: 15.54GB Snapshot Volume Data Reduction Ratio: 10.71:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 10.71:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 1 ::*> volume efficiency show -volume vol2_dst -instance Vserver Name: svm2 Volume Name: vol2_dst Volume Path: /vol/vol2_dst State: Disabled Auto State: - Status: Idle Progress: Idle for 00:00:00 Type: Regular Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: 0 Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: - Last Success Operation End: - Last Operation Begin: - Last Operation End: Mon Nov 20 12:18:38 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 0B Logical Data Limit: 4KB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: - Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 0 Duplicate Blocks Found: 0 Sorting Begin: - Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: - Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive 
Storage Efficiency Mode: efficient Verify Trigger Rate: 1 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 0 Same FP Count: 0 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: false Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 302 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume show -volume vol2_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol2_dst 17.93GB 0 2.71GB 17.93GB 17.04GB 14.32GB 84% 470.2MB 3% 470.2MB 3% 13.71GB 0B 0% 14.73GB 
86% - 14.73GB 0B 0% ::*> snapshot show -volume vol2_dst ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm2 vol2_dst snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_121317 59.16MB 0% 0% ::*> snapshot show -volume vol2_dst -instance Vserver: svm2 Volume: vol2_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_121317 Snapshot Data Set ID: 4294968324 Snapshot Master Data Set ID: 6452445362 Creation Time: Mon Nov 20 12:13:17 2023 Snapshot Busy: true List of Owners: snapmirror Snapshot Size: 59.16MB Percentage of Total Blocks: 0% Percentage of Used Blocks: 0% Consistency Point Count: 62 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 1 Logical Snap ID: 1 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror, SMDeleteMe=snapmirror Instance UUID: a3400b15-981d-4691-8b54-22d3ec1abec3 Version UUID: 7c58db4f-5d0f-454e-9063-a96df1f1670e 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 14.38GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 376.0MB VBN Zero Savings from Snapshot: 35.86MB Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 14.78GB Performance Metadata from Snapshot: 852KB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false
The key points are as follows. Apart from Inactive data compression, they match what we saw for vol1_dst (a quick way to pull out the relevant fields is sketched after this list).

- The deduplication and compression savings of the source volume are preserved
- The destination volume's Storage Efficiency Mode became `efficient`
  - However, the efficiency State remains Disabled
- Inactive data compression shows `Is Enabled: false`, which matches the source volume vol2, where it was also left disabled
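These states can be pulled out without wading through the full `-instance` output. A sketch (the field names are assumptions inferred from the output above):

```
::*> volume show -volume vol2_dst -fields storage-efficiency-mode
::*> volume efficiency show -volume vol2_dst -fields state
```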
Enabling TSSE and Inactive data compression on the destination volume
Let's enable TSSE and Inactive data compression on the destination volume.
```
::*> volume efficiency on -volume vol2_dst
Efficiency for volume "vol2_dst" of Vserver "svm2" is enabled.

::*> volume efficiency inactive-data-compression modify -volume vol2_dst -is-enabled true -threshold-days 1 -threshold-days-min 1 -threshold-days-max 1
```
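To confirm the change took effect, it is cheap to re-check right away with the same command used throughout this article; you would expect `Is Enabled: true` and the Threshold values to read 1:

```
::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance
```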
Adding test files
I could run TSSE and Inactive data compression manually on the destination volume at this point, but what caught my attention is that the deduplication processing ran automatically when data was deduplicated between the Snapshots transferred by SnapMirror.
Let's check whether the deduplication processing also runs automatically when the volume holds data blocks that do not duplicate the initially transferred Snapshot.
If the deduplication processing does kick in, I have a faint hope that deduplication will also work within the newly transferred Snapshot itself.
As test files, I copy /etc to vol2.
```
$ for i in `seq 1 100`; do
    sudo cp -rp /etc /mnt/fsxn/vol2/etc"$i"
    sleep 5
done
cp: cannot create regular file '/mnt/fsxn/vol2/etc49/pki/ca-trust/extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068_1.pem': No space left on device
cp: cannot create symbolic link '/mnt/fsxn/vol2/etc49/pki/ca-trust/extracted/pem/directory-hash/3bde41ac.1': No space left on device
cp: cannot create symbolic link '/mnt/fsxn/vol2/etc49/pki/ca-trust/extracted/pem/directory-hash/626dceaf.0': No space left on device
cp: cannot create symbolic link '/mnt/fsxn/vol2/etc49/pki/ca-trust/extracted/pem/directory-hash/c559d742.0': No space left on device
cp: cannot create regular file '/mnt/fsxn/vol2/etc49/pki/ca-trust/extracted/pem/directory-hash/GTS_Root_R3.pem': No space left on device
```
Since I had not expanded the size of vol2, the volume ran out of space.
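If you want to avoid hitting this, you could grow the volume beforehand or let ONTAP grow it automatically. A minimal sketch (the sizes are arbitrary examples):

```
::*> volume modify -vserver svm -volume vol2 -size 32GB

::*> volume autosize -vserver svm -volume vol2 -mode grow -maximum-size 32GB
```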
Let's check vol2's volume information and Snapshot information at this point.
::> volume show -volume vol2 -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 16GB 0 69.82MB 16GB 15.20GB 15.13GB 99% 638.9MB 4% 638.9MB 4% 457.9MB 0B 0% 15.75GB 104% - 15.75GB - - ::> snapshot show -volume vol2 ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm vol2 snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_121317 52.34MB 0% 0% ::> snapshot show -volume vol2 -instance Vserver: svm Volume: vol2 Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_121317 Creation Time: Mon Nov 20 12:13:17 2023 Snapshot Busy: false List of Owners: - Snapshot Size: 52.34MB Percentage of Total Blocks: 0% Percentage of Used Blocks: 0% Comment: - 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Expiry Time: - SnapLock Expiry Time: -
Incremental SnapMirror transfer
Now, let's run a SnapMirror update to transfer the differences.
::*> snapmirror update -destination-path svm2:vol2_dst Operation is queued: snapmirror update of destination "svm2:vol2_dst". ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - svm:vol2 XDP svm2:vol2_dst Snapmirrored Transferring 412.3MB true 11/20 12:50:41 2 entries were displayed. ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - svm:vol2 XDP svm2:vol2_dst Snapmirrored Transferring 567.9MB true 11/20 12:50:57 2 entries were displayed. ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - svm:vol2 XDP svm2:vol2_dst Snapmirrored Idle - true - 2 entries were displayed. ::*> snapmirror show -destination-path svm2:vol2_dst Source Path: svm:vol2 Source Cluster: - Source Vserver: svm Source Volume: vol2 Destination Path: svm2:vol2_dst Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol2_dst Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Snapmirrored Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: - Newest Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027 Newest Snapshot Timestamp: 11/20 12:50:27 Exported Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027 Exported Snapshot Timestamp: 11/20 12:50:27 Healthy: true Relationship ID: 3f31c76e-879d-11ee-b677-8751a02f6bb7 Source Vserver UUID: 0d9b83f3-8520-11ee-84de-4b7ecb818153 Destination Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: update Last Transfer Error: - Last Transfer Error Codes: - Last Transfer Size: 1017MB Last Transfer Network Compression Ratio: 1:1 Last Transfer Duration: 0:1:3 Last Transfer From: svm:vol2 Last Transfer End Timestamp: 11/20 12:51:30 Unhealthy Reason: - Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: 0:1:19 Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0648fddba7bd041af-01 Identity Preserve Vserver DR: - Number of Successful Updates: 2 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 16827721414 Total Transfer Time in Seconds: 216 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
About 1GB was transferred.
After the SnapMirror update completes, let's check the aggregate and volume information.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 813.5GB 6% online 4 FsxId0648fddba7b raid0, d041af-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 47.94GB 5% Aggregate Metadata 320.3MB 0% Snapshot Reserve 45.36GB 5% Total Used 93.61GB 10% Total Physical Used 59.32GB 7% Total Provisioned Space 54.91GB 6% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> volume show-footprint -volume vol2_dst Vserver : svm2 Volume : vol2_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 14.49GB 2% Footprint in Performance Tier 14.77GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 108.3MB 0% Deduplication Metadata 292KB 0% Deduplication 292KB 0% Delayed Frees 280.1MB 0% File Operation Metadata 4KB 0% Total Footprint 14.87GB 2% Effective Total Footprint 14.87GB 2% ::*> volume show-footprint -volume vol2_dst -instance Vserver: svm2 Volume Name: vol2_dst Volume MSID: 2157478066 Volume DSID: 1028 Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Aggregate Name: aggr1 Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110 Hostname: FsxId0648fddba7bd041af-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: 292KB Deduplication Footprint Percent: 0% Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 14.49GB Volume Data Footprint Percent: 2% Flexible Volume Metadata Footprint: 108.3MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 280.1MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 14.87GB Total Footprint Percent: 2% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 14.77GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: 292KB Total Deduplication Footprint Percent: 0% Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 14.87GB Effective Total after Footprint Data Reduction Percent: 2% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0648fddba7bd041af-01 Total Storage Efficiency Ratio: 6.77:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 4.05:1 Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 
4.05:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0648fddba7bd041af-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 297.3GB Total Physical Used: 43.92GB Total Storage Efficiency Ratio: 6.77:1 Total Data Reduction Logical Used Without Snapshots: 115.0GB Total Data Reduction Physical Used Without Snapshots: 28.38GB Total Data Reduction Efficiency Ratio Without Snapshots: 4.05:1 Total Data Reduction Logical Used without snapshots and flexclones: 115.0GB Total Data Reduction Physical Used without snapshots and flexclones: 28.38GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 4.05:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 299.3GB Total Physical Used in FabricPool Performance Tier: 46.24GB Total FabricPool Performance Tier Storage Efficiency Ratio: 6.47:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 117.1GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 30.70GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.81:1 Logical Space Used for All Volumes: 115.0GB Physical Space Used for All Volumes: 28.07GB Space Saved by Volume Deduplication: 86.92GB Space Saved by Volume Deduplication and pattern detection: 86.96GB Volume Deduplication Savings ratio: 4.10:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 35.86MB Volume Data Reduction SE Ratio: 4.10:1 Logical Space Used by the Aggregate: 43.92GB Physical Space Used by the Aggregate: 43.92GB Space Saved by Aggregate Data Reduction: 176KB Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 182.2GB Physical Size Used by Snapshot Copies: 15.54GB Snapshot Volume Data Reduction Ratio: 11.72:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 11.72:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume efficiency show -volume vol2_dst -instance Vserver Name: svm2 Volume Name: vol2_dst Volume Path: /vol/vol2_dst State: Enabled Auto State: - Status: Idle Progress: Idle for 00:00:50 Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Mon Nov 20 12:51:30 2023 Last Success Operation End: Mon Nov 20 12:51:57 2023 Last Operation Begin: Mon Nov 20 12:51:30 2023 Last Operation End: Mon Nov 20 12:51:57 2023 Last Operation Size: 830.5MB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 15.82GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 212610 Duplicate Blocks Found: 200604 Sorting Begin: Mon Nov 20 12:51:31 UTC 2023 Blocks 
Deduplicated: 401082 Blocks Snapshot Crunched: 0 De-duping Begin: Mon Nov 20 12:51:31 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 212610 Same FP Count: 200604 Same FBN: 0 Same Data: 401082 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 63 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: false Scan Mode: - Progress: IDLE Status: FAILURE Compression Algorithm: lzopro Failure Reason: Inactive data compression disabled on volume Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 1087 Time since Last Inactive Data Compression Scan ended(sec): 1083 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 1083 Average time for Cold Data Compression(sec): 4 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume show -volume vol2_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- 
----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol2_dst 18.09GB 0 2.75GB 18.09GB 17.18GB 14.43GB 84% 1.39GB 9% 1.39GB 9% 13.75GB 0B 0% 15.76GB 92% - 15.76GB 0B 0% ::*> snapshot show -volume vol2_dst ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm2 vol2_dst snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_121317 60.23MB 0% 0% snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027 144KB 0% 0% 2 entries were displayed. ::*> snapshot show -volume vol2_dst -instance Vserver: svm2 Volume: vol2_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_121317 Snapshot Data Set ID: 4294968324 Snapshot Master Data Set ID: 6452445362 Creation Time: Mon Nov 20 12:13:17 2023 Snapshot Busy: false List of Owners: - Snapshot Size: 60.23MB Percentage of Total Blocks: 0% Percentage of Used Blocks: 0% Consistency Point Count: 62 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 1 Logical Snap ID: 1 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror, SMDeleteMe=snapmirror Instance UUID: a3400b15-981d-4691-8b54-22d3ec1abec3 Version UUID: 7c58db4f-5d0f-454e-9063-a96df1f1670e 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 14.38GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 376.0MB VBN Zero Savings from Snapshot: 35.86MB Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 14.78GB Performance Metadata from Snapshot: 852KB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false Vserver: svm2 Volume: vol2_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027 Snapshot Data Set ID: 12884902916 Snapshot Master Data Set ID: 15042379954 Creation Time: Mon Nov 20 12:50:27 2023 Snapshot Busy: true List of Owners: snapmirror Snapshot Size: 144KB Percentage of Total Blocks: 0% Percentage of Used Blocks: 0% Consistency Point Count: 91 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 3 Logical Snap ID: 3 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror Instance UUID: 64454ce7-7d31-47d3-b1a7-f56d61f485df Version UUID: f2164c11-eff0-4582-935c-8431d20c24c0 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 14.43GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 1.29GB VBN Zero Savings from Snapshot: 35.86MB Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 15.76GB Performance Metadata from Snapshot: 456KB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false 2 entries were displayed.
The key points are as follows (a comparison sketch follows this list):
- TSSE ran on the destination after the SnapMirror transfer completed
- TSSE picked up 830.5MB of data for processing
- Although the source volume's sis-space-saved is 638.9MB, the destination volume shows 1.39GB, which tells us that additional deduplication took effect on the destination
- Comparing before and after the SnapMirror resync, Total Physical Used has not changed significantly
- The Inactive data compression that we had enabled has been disabled
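To eyeball that source/destination gap in one place, you can query the savings fields of both volumes with a single command (a minimal sketch; it assumes both volumes are visible from the same cluster shell and uses only fields that already appear in this article):

::*> volume show -volume vol2,vol2_dst -fields sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent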
Apparently, after a SnapMirror resync, TSSE automatically runs against the transferred Snapshot. That is a welcome behavior.
What bothers me, though, is the point that "the Inactive data compression that we had enabled has been disabled".
Perhaps the destination is affected by the status on the SnapMirror source.
Let's experiment.
Enable Inactive data compression on vol2.
::*> volume efficiency inactive-data-compression modify -volume vol2 -is-enabled true ::*> volume efficiency inactive-data-compression show -instance -volume vol2 Volume: vol2 Vserver: svm Is Enabled: true Scan Mode: - Progress: IDLE Status: FAILURE Compression Algorithm: lzopro Failure Reason: Inactive data compression disabled on volume Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 9675 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0%
It is now enabled.
In this state, let's run the transfer again.
::*> snapmirror update -destination-path svm2:vol2_dstOperation is queued: snapmirror update of destination "svm2:vol2_dst". ::*> snapmirror show -destination-path svm2:vol2_dst Source Path: svm:vol2 Source Cluster: - Source Vserver: svm Source Volume: vol2 Destination Path: svm2:vol2_dst Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol2_dst Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Snapmirrored Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: - Newest Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703 Newest Snapshot Timestamp: 11/20 12:57:03 Exported Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703 Exported Snapshot Timestamp: 11/20 12:57:03 Healthy: true Relationship ID: 3f31c76e-879d-11ee-b677-8751a02f6bb7 Source Vserver UUID: 0d9b83f3-8520-11ee-84de-4b7ecb818153 Destination Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: update Last Transfer Error: - Last Transfer Error Codes: - Last Transfer Size: 3.27KB Last Transfer Network Compression Ratio: 1:1 Last Transfer Duration: 0:0:3 Last Transfer From: svm:vol2 Last Transfer End Timestamp: 11/20 12:57:06 Unhealthy Reason: - Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: 0:0:52 Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0648fddba7bd041af-01 Identity Preserve Vserver DR: - Number of Successful Updates: 3 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 16827724766 Total Transfer Time in Seconds: 219 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
After the re-transfer completes, check the Inactive data compression status of the destination volume.
::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: FAILURE Compression Algorithm: lzopro Failure Reason: Inactive data compression disabled on volume Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 69 Time since Last Inactive Data Compression Scan ended(sec): 49 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 49 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0%
It is now true. So Inactive data compression on the destination does appear to follow the status of the SnapMirror source.
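As a practical takeaway, the state you want on the destination should be set on the source before transferring; a subsequent update then propagates it (a minimal sketch based on the commands used in this experiment):

::*> volume efficiency inactive-data-compression modify -vserver svm -volume vol2 -is-enabled true
::*> snapmirror update -destination-path svm2:vol2_dst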
Incidentally, because the resync ran over a SnapMirror relationship with the MirrorAllSnapshots policy, the Snapshot from the earlier SnapMirror transfer has been deleted.
::*> snapshot show -volume vol2_dst ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm2 vol2_dst snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027 960KB 0% 0% snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703 156KB 0% 0% 2 entries were displayed. ::*> snapshot show -volume vol2_dst -instance Vserver: svm2 Volume: vol2_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027 Snapshot Data Set ID: 12884902916 Snapshot Master Data Set ID: 15042379954 Creation Time: Mon Nov 20 12:50:27 2023 Snapshot Busy: false List of Owners: - Snapshot Size: 960KB Percentage of Total Blocks: 0% Percentage of Used Blocks: 0% Consistency Point Count: 91 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 3 Logical Snap ID: 3 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror Instance UUID: 64454ce7-7d31-47d3-b1a7-f56d61f485df Version UUID: f2164c11-eff0-4582-935c-8431d20c24c0 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 14.43GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 1.29GB VBN Zero Savings from Snapshot: 35.86MB Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 15.76GB Performance Metadata from Snapshot: 456KB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false Vserver: svm2 Volume: vol2_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703 Snapshot Data Set ID: 21474837508 Snapshot Master Data Set ID: 23632314546 Creation Time: Mon Nov 20 12:57:03 2023 Snapshot Busy: true List of Owners: snapmirror Snapshot Size: 156KB Percentage of Total Blocks: 0% Percentage of Used Blocks: 0% Consistency Point Count: 110 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 5 Logical Snap ID: 5 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror Instance UUID: 93c89d51-467e-42d2-849a-0b6e9aa77371 Version UUID: e41071e5-69e7-42b3-bdc4-6279ed7ff522 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 14.43GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 1.29GB VBN Zero Savings from Snapshot: 35.86MB Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 15.76GB Performance Metadata from Snapshot: 180KB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false 2 entries were displayed. 
::*> volume show -volume vol2_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol2_dst 18.09GB 0 2.75GB 18.09GB 17.18GB 14.43GB 84% 1.39GB 9% 1.39GB 9% 13.75GB 0B 0% 15.76GB 92% - 15.76GB 0B 0%
Running TSSE manually on the SnapMirror destination volume
Next, let's run TSSE manually. (As the output below shows, a plain volume efficiency start fails on a SnapMirror destination; the -scan-old-data option is required.)
::*> volume efficiency start -volume vol2_dst Error: command failed: Failed to start efficiency on volume "vol2_dst" of Vserver "svm2": Invalid operation on a SnapVault secondary volume. ::*> volume efficiency start -volume vol2_dst -scan-old-data Warning: This operation scans all of the data in volume "vol2_dst" of Vserver "svm2". It might take a significant time, and degrade performance during that time. Do you want to continue? {y|n}: y The efficiency operation for volume "vol2_dst" of Vserver "svm2" has started. ::*> volume efficiency show -volume vol2_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol2_dst Enabled Active 8786212 KB Scanned - ::*> volume efficiency show -volume vol2_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol2_dst Enabled Active 12087712 KB Scanned - ::*> volume efficiency show -volume vol2_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol2_dst Enabled Active 28491184 KB (95%) Done - ::*> volume efficiency show -volume vol2_dst -instance Vserver Name: svm2 Volume Name: vol2_dst Volume Path: /vol/vol2_dst State: Enabled Auto State: - Status: Active Progress: 28672756 KB (96%) Done Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Mon Nov 20 12:57:06 2023 Last Success Operation End: Mon Nov 20 12:57:27 2023 Last Operation Begin: Mon Nov 20 12:57:06 2023 Last Operation End: Mon Nov 20 12:57:27 2023 Last Operation Size: 252KB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 300KB Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 15.84GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Saving Checkpoint Time: Tue Nov 21 10:36:38 UTC 2023 Checkpoint Operation Type: Scan Checkpoint Stage: Saving_sharing Checkpoint Substage: - Checkpoint Progress: 0 KB (0%) Done Fingerprints Gathered: 7766581 Blocks Processed For Compression: 0 Gathering Begin: Tue Nov 21 10:33:07 UTC 2023 Gathering Phase 2 Begin: Tue Nov 21 10:35:33 UTC 2023 Fingerprints Sorted: 7766581 Duplicate Blocks Found: 7464147 Sorting Begin: Tue Nov 21 10:36:18 UTC 2023 Blocks Deduplicated: 6655051 Blocks Snapshot Crunched: 0 De-duping Begin: Tue Nov 21 10:36:26 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 7766581 Same FP Count: 7464147 Same FBN: 0 Same Data: 6655051 No Op: 0 Same VBN: 504771 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 7989 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume 
Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency show -volume vol2_dst Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol2_dst Enabled Idle Idle for 00:00:08 - ::*> volume efficiency show -volume vol2_dst -instance Vserver Name: svm2 Volume Name: vol2_dst Volume Path: /vol/vol2_dst State: Enabled Auto State: - Status: Idle Progress: Idle for 00:00:10 Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Tue Nov 21 10:33:07 2023 Last Success Operation End: Tue Nov 21 10:49:39 2023 Last Operation Begin: Tue Nov 21 10:33:07 2023 Last Operation End: Tue Nov 21 10:49:39 2023 Last Operation Size: 29.63GB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 300KB Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 15.84GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 7766581 Blocks Processed For Compression: 0 Gathering Begin: Tue Nov 21 10:33:07 UTC 2023 Gathering Phase 2 Begin: Tue Nov 21 10:35:33 UTC 2023 Fingerprints Sorted: 7766581 Duplicate Blocks Found: 7464147 Sorting Begin: Tue Nov 21 10:36:18 UTC 2023 Blocks Deduplicated: 6754145 Blocks Snapshot Crunched: 0 De-duping Begin: Tue Nov 21 10:36:26 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 7766581 Same FP Count: 7464147 Same FBN: 0 Same Data: 6754145 No Op: 0 Same VBN: 701822 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 8142 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true
It appears the operation processed 29.63GB.
Let's check the aggregate and volume information after the manual TSSE run.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 813.3GB 6% online 4 FsxId0648fddba7b raid0, d041af-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 48.18GB 5% Aggregate Metadata 323.3MB 0% Snapshot Reserve 45.36GB 5% Total Used 93.85GB 10% Total Physical Used 50.26GB 6% Total Provisioned Space 54.30GB 6% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> volume show-footprint -volume vol2_dst Vserver : svm2 Volume : vol2_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 14.85GB 2% Footprint in Performance Tier 15.00GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 108.3MB 0% Deduplication Metadata 6.95MB 0% Deduplication 6.95MB 0% Delayed Frees 148.2MB 0% File Operation Metadata 4KB 0% Total Footprint 15.11GB 2% Effective Total Footprint 15.11GB 2% ::*> volume show-footprint -volume vol2_dst -instance Vserver: svm2 Volume Name: vol2_dst Volume MSID: 2157478066 Volume DSID: 1028 Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Aggregate Name: aggr1 Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110 Hostname: FsxId0648fddba7bd041af-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: 6.95MB Deduplication Footprint Percent: 0% Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 14.85GB Volume Data Footprint Percent: 2% Flexible Volume Metadata Footprint: 108.3MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 148.2MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 15.11GB Total Footprint Percent: 2% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 15.00GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: 6.95MB Total Deduplication Footprint Percent: 0% Footprint Data Reduction by Auto Adaptive Compression: - Footprint Data Reduction by Auto Adaptive Compression Percent: - Total Footprint Data Reduction: - Total Footprint Data Reduction Percent: - Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 15.11GB Effective Total after Footprint Data Reduction Percent: 2% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0648fddba7bd041af-01 Total Storage Efficiency Ratio: 6.74:1 Total Data Reduction Efficiency Ratio w/o Snapshots: 7.09:1 Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 
7.09:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0648fddba7bd041af-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 298.2GB Total Physical Used: 44.23GB Total Storage Efficiency Ratio: 6.74:1 Total Data Reduction Logical Used Without Snapshots: 115.0GB Total Data Reduction Physical Used Without Snapshots: 16.22GB Total Data Reduction Efficiency Ratio Without Snapshots: 7.09:1 Total Data Reduction Logical Used without snapshots and flexclones: 115.0GB Total Data Reduction Physical Used without snapshots and flexclones: 16.22GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 7.09:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 300.3GB Total Physical Used in FabricPool Performance Tier: 46.62GB Total FabricPool Performance Tier Storage Efficiency Ratio: 6.44:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 117.1GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 18.60GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 6.30:1 Logical Space Used for All Volumes: 115.0GB Physical Space Used for All Volumes: 15.90GB Space Saved by Volume Deduplication: 99.05GB Space Saved by Volume Deduplication and pattern detection: 99.08GB Volume Deduplication Savings ratio: 7.23:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 36.78MB Volume Data Reduction SE Ratio: 7.23:1 Logical Space Used by the Aggregate: 44.23GB Physical Space Used by the Aggregate: 44.23GB Space Saved by Aggregate Data Reduction: 156KB Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 183.2GB Physical Size Used by Snapshot Copies: 28.01GB Snapshot Volume Data Reduction Ratio: 6.54:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 6.54:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume efficiency show -volume vol2_dst -instance Vserver Name: svm2 Volume Name: vol2_dst Volume Path: /vol/vol2_dst State: Enabled Auto State: - Status: Idle Progress: Idle for 00:01:40 Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Tue Nov 21 10:33:07 2023 Last Success Operation End: Tue Nov 21 10:49:39 2023 Last Operation Begin: Tue Nov 21 10:33:07 2023 Last Operation End: Tue Nov 21 10:49:39 2023 Last Operation Size: 29.63GB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 300KB Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 15.84GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 7766581 Blocks Processed For Compression: 0 Gathering Begin: Tue Nov 21 10:33:07 UTC 2023 Gathering Phase 2 Begin: Tue Nov 21 10:35:33 UTC 2023 Fingerprints Sorted: 7766581 Duplicate Blocks Found: 
7464147 Sorting Begin: Tue Nov 21 10:36:18 UTC 2023 Blocks Deduplicated: 6754145 Blocks Snapshot Crunched: 0 De-duping Begin: Tue Nov 21 10:36:26 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 7766581 Same FP Count: 7464147 Same FBN: 0 Same Data: 6754145 No Op: 0 Same VBN: 701822 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 8142 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Press <space> to page down, <return> for next line, or 'q' to quit... ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 76404 Time since Last Inactive Data Compression Scan ended(sec): 76399 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 76399 Average time for Cold Data Compression(sec): 4 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume show -volume vol2_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ 
-------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol2_dst 17.48GB 0 2.63GB 17.48GB 16.61GB 13.98GB 84% 13.51GB 49% 13.51GB 49% 1.59GB 0B 0% 27.43GB 165% - 15.78GB 0B 0% ::*> snapshot show -volume vol2_dst ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm2 vol2_dst snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027 960KB 0% 0% snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703 12.53GB 72% 84% 2 entries were displayed. ::*> snapshot show -volume vol2_dst -instance Vserver: svm2 Volume: vol2_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027 Snapshot Data Set ID: 12884902916 Snapshot Master Data Set ID: 15042379954 Creation Time: Mon Nov 20 12:50:27 2023 Snapshot Busy: false List of Owners: - Snapshot Size: 960KB Percentage of Total Blocks: 0% Percentage of Used Blocks: 0% Consistency Point Count: 91 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 3 Logical Snap ID: 3 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror Instance UUID: 64454ce7-7d31-47d3-b1a7-f56d61f485df Version UUID: f2164c11-eff0-4582-935c-8431d20c24c0 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 14.43GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 1.29GB VBN Zero Savings from Snapshot: 35.86MB Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 15.76GB Performance Metadata from Snapshot: 456KB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false Vserver: svm2 Volume: vol2_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703 Snapshot Data Set ID: 21474837508 Snapshot Master Data Set ID: 23632314546 Creation Time: Mon Nov 20 12:57:03 2023 Snapshot Busy: true List of Owners: snapmirror Snapshot Size: 12.53GB Percentage of Total Blocks: 72% Percentage of Used Blocks: 84% Consistency Point Count: 110 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 5 Logical Snap ID: 5 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror Instance UUID: 93c89d51-467e-42d2-849a-0b6e9aa77371 Version UUID: e41071e5-69e7-42b3-bdc4-6279ed7ff522 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 14.43GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 1.29GB VBN Zero Savings from Snapshot: 35.86MB Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 15.76GB Performance Metadata from Snapshot: 180KB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false 2 entries were displayed.
The deduplication savings have increased from 1.39GB to 13.51GB.
Because roughly 12GB of blocks were removed from the AFS by deduplication, the Snapshot size has grown by about 12GB, to 12.53GB.
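The numbers line up almost exactly: the savings delta is 13.51GB − 1.39GB ≈ 12.12GB, and the busy SnapMirror Snapshot grew from 156KB to 12.53GB. In other words, the blocks that deduplication released from the active file system are now held nearly one-for-one by the Snapshot, which is why the physical usage does not drop yet.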
Checking the behavior of Inactive data compression
Let's check whether Inactive data compression takes effect on the SnapMirror destination volume.
I waited more than a day after the SnapMirror transfer, but nothing changed.
::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 43399 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0%
Let's run it manually.
::*> volume efficiency inactive-data-compression start -volume vol2_dst Inactive data compression scan started on volume "vol2_dst" in Vserver "svm2" ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 52% Phase1 L1s Processed: 52807 Phase1 Lns Skipped: L1: 2 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 5543600 Phase2 Blocks Processed: 2900081 Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 18 Time since Last Inactive Data Compression Scan ended(sec): 3 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 3 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 24 Time since Last Inactive Data Compression Scan ended(sec): 19 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 19 Average time for Cold Data Compression(sec): 4 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0%
Number of Cold Blocks Encountered is 0, so no data blocks were judged to be cold data.
Let's specify -inactive-days 1 so that data untouched for one day or more gets compressed.
::*> volume efficiency inactive-data-compression start -volume vol2_dst -inactive-days 1 Inactive data compression scan started on volume "vol2_dst" in Vserver "svm2" ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 26 Time since Last Inactive Data Compression Scan ended(sec): 21 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 21 Average time for Cold Data Compression(sec): 4 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0%
Again, no change.
I do not believe I accessed the data, but perhaps it got touched without my noticing.
As a last resort, let's specify -inactive-days 0 so that all data becomes a compression target.
::*> volume efficiency inactive-data-compression start -volume vol2_dst -inactive-days 0 Inactive data compression scan started on volume "vol2_dst" in Vserver "svm2" ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 0% Phase1 L1s Processed: 10018 Phase1 Lns Skipped: L1: 5 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 0 Phase2 Blocks Processed: 0 Number of Cold Blocks Encountered: 320080 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 237832 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 151 Time since Last Inactive Data Compression Scan ended(sec): 146 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 146 Average time for Cold Data Compression(sec): 4 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 1% Phase1 L1s Processed: 52779 Phase1 Lns Skipped: L1: 30 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 5543600 Phase2 Blocks Processed: 42974 Number of Cold Blocks Encountered: 438848 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 313704 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 206 Time since Last Inactive Data Compression Scan ended(sec): 201 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 201 Average time for Cold Data Compression(sec): 4 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -volume vol2_dst Vserver Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm ---------- ------ ---------- --------- -------- ------ --------------------- svm2 vol2_dst true default RUNNING SUCCESS lzopro ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 14% Phase1 L1s Processed: 52779 Phase1 Lns Skipped: L1: 30 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 5543600 Phase2 Blocks Processed: 765596 Number of Cold Blocks Encountered: 1120672 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 924704 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 256 Time since Last Inactive Data Compression Scan ended(sec): 251 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 251 Average time for Cold Data Compression(sec): 4 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 
Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 77% Phase1 L1s Processed: 52779 Phase1 Lns Skipped: L1: 30 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 5543600 Phase2 Blocks Processed: 4276736 Number of Cold Blocks Encountered: 3731712 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 3325608 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 341 Time since Last Inactive Data Compression Scan ended(sec): 336 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 336 Average time for Cold Data Compression(sec): 4 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 3731816 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 3325616 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 204 Time since Last Inactive Data Compression Scan ended(sec): 27 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 27 Average time for Cold Data Compression(sec): 29 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 6%
Number of Compression Done Blocks is 3325616, so a considerable number of data blocks were compressed.
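For a sense of scale (a rough estimate assuming the 4KiB WAFL block size): 3,325,616 blocks × 4KiB ≈ 13.6GB (about 12.7GiB) of cold data passed through the compression phase.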
Let's check the aggregate and the volume to see how much data was actually saved by the compression.
::*> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr1 861.8GB 813.2GB 6% online 4 FsxId0648fddba7b raid0, d041af-01 mirrored, normal ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 48.19GB 5% Aggregate Metadata 7.62GB 1% Snapshot Reserve 45.36GB 5% Total Used 93.87GB 10% Total Physical Used 51.03GB 6% Total Provisioned Space 54.30GB 6% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> volume show-footprint -volume vol2_dst Vserver : svm2 Volume : vol2_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 14.85GB 2% Footprint in Performance Tier 15.01GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 108.3MB 0% Deduplication Metadata 6.95MB 0% Deduplication 6.95MB 0% Delayed Frees 156.4MB 0% File Operation Metadata 4KB 0% Total Footprint 15.12GB 2% Footprint Data Reduction 7.94GB 1% Auto Adaptive Compression 7.94GB 1% Effective Total Footprint 7.18GB 1% ::*> volume show-footprint -volume vol2_dst -instance Vserver: svm2 Volume Name: vol2_dst Volume MSID: 2157478066 Volume DSID: 1028 Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Aggregate Name: aggr1 Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110 Hostname: FsxId0648fddba7bd041af-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: 6.95MB Deduplication Footprint Percent: 0% Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 14.85GB Volume Data Footprint Percent: 2% Flexible Volume Metadata Footprint: 108.3MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 156.4MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 15.12GB Total Footprint Percent: 2% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 15.01GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: 6.95MB Total Deduplication Footprint Percent: 0% Footprint Data Reduction by Auto Adaptive Compression: 7.94GB Footprint Data Reduction by Auto Adaptive Compression Percent: 1% Total Footprint Data Reduction: 7.94GB Total Footprint Data Reduction Percent: 1% Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 7.18GB Effective Total after Footprint Data Reduction Percent: 1% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency Aggregate: aggr1 Node: FsxId0648fddba7bd041af-01 Total Storage Efficiency Ratio: 8.00:1 Total Data Reduction Efficiency Ratio w/o 
Snapshots: 8.30:1 Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 8.30:1 ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0648fddba7bd041af-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 297.9GB Total Physical Used: 37.24GB Total Storage Efficiency Ratio: 8.00:1 Total Data Reduction Logical Used Without Snapshots: 114.7GB Total Data Reduction Physical Used Without Snapshots: 13.81GB Total Data Reduction Efficiency Ratio Without Snapshots: 8.30:1 Total Data Reduction Logical Used without snapshots and flexclones: 114.7GB Total Data Reduction Physical Used without snapshots and flexclones: 13.81GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 8.30:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 300.4GB Total Physical Used in FabricPool Performance Tier: 39.92GB Total FabricPool Performance Tier Storage Efficiency Ratio: 7.52:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 117.1GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 16.50GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 7.10:1 Logical Space Used for All Volumes: 114.7GB Physical Space Used for All Volumes: 15.60GB Space Saved by Volume Deduplication: 99.05GB Space Saved by Volume Deduplication and pattern detection: 99.08GB Volume Deduplication Savings ratio: 7.35:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 36.78MB Volume Data Reduction SE Ratio: 7.35:1 Logical Space Used by the Aggregate: 44.53GB Physical Space Used by the Aggregate: 37.24GB Space Saved by Aggregate Data Reduction: 7.29GB Aggregate Data Reduction SE Ratio: 1.20:1 Logical Size Used by Snapshot Copies: 183.2GB Physical Size Used by Snapshot Copies: 28.01GB Snapshot Volume Data Reduction Ratio: 6.54:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 6.54:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> volume efficiency show -volume vol2_dst -instance Vserver Name: svm2 Volume Name: vol2_dst Volume Path: /vol/vol2_dst State: Enabled Auto State: - Status: Idle Progress: Idle for 14:58:10 Type: Snapvault Schedule: - Efficiency Policy Name: - Efficiency Policy UUID: - Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Tue Nov 21 10:33:07 2023 Last Success Operation End: Tue Nov 21 10:49:39 2023 Last Operation Begin: Tue Nov 21 10:33:07 2023 Last Operation End: Tue Nov 21 10:49:39 2023 Last Operation Size: 29.63GB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 316KB Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 15.84GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 7766581 Blocks Processed For Compression: 0 Gathering Begin: Tue Nov 21 10:33:07 UTC 2023 Gathering Phase 2 
Begin: Tue Nov 21 10:35:33 UTC 2023 Fingerprints Sorted: 7766581 Duplicate Blocks Found: 7464147 Sorting Begin: Tue Nov 21 10:36:18 UTC 2023 Blocks Deduplicated: 6754145 Blocks Snapshot Crunched: 0 De-duping Begin: Tue Nov 21 10:36:26 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 7766581 Same FP Count: 7464147 Same FBN: 0 Same Data: 6754145 No Op: 0 Same VBN: 701822 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 8142 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance Volume: vol2_dst Vserver: svm2 Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 3731816 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 3325616 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 234 Time since Last Inactive Data Compression Scan ended(sec): 57 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 57 Average time for Cold Data Compression(sec): 29 Tuning Enabled: true Threshold: 1 Threshold Upper Limit: 1 Threshold Lower Limit: 1 Client Read history window: 14 Incompressible Data Percentage: 6% ::*> volume show -volume vol2_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------- ---- --------- --------------- ------- ------- 
------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol2_dst 17.48GB 0 2.63GB 17.48GB 16.61GB 13.98GB 84% 13.51GB 49% 13.51GB 49% 1.59GB 0B 0% 27.43GB 165% - 15.78GB 0B 0% ::*> snapshot show -volume vol2_dst ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm2 vol2_dst snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027 960KB 0% 0% snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703 12.53GB 72% 84% 2 entries were displayed. ::*> snapshot show -volume vol2_dst -instance Vserver: svm2 Volume: vol2_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027 Snapshot Data Set ID: 12884902916 Snapshot Master Data Set ID: 15042379954 Creation Time: Mon Nov 20 12:50:27 2023 Snapshot Busy: false List of Owners: - Snapshot Size: 960KB Percentage of Total Blocks: 0% Percentage of Used Blocks: 0% Consistency Point Count: 91 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 3 Logical Snap ID: 3 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror Instance UUID: 64454ce7-7d31-47d3-b1a7-f56d61f485df Version UUID: f2164c11-eff0-4582-935c-8431d20c24c0 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 14.43GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 1.29GB VBN Zero Savings from Snapshot: 35.86MB Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 15.76GB Performance Metadata from Snapshot: 456KB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false Vserver: svm2 Volume: vol2_dst Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703 Snapshot Data Set ID: 21474837508 Snapshot Master Data Set ID: 23632314546 Creation Time: Mon Nov 20 12:57:03 2023 Snapshot Busy: true List of Owners: snapmirror Snapshot Size: 12.53GB Percentage of Total Blocks: 72% Percentage of Used Blocks: 84% Consistency Point Count: 110 Comment: - File System Version: 9.13 File System Block Format: 64-bit Physical Snap ID: 5 Logical Snap ID: 5 Database Record Owner: - Snapshot Tags: SMCreated=snapmirror Instance UUID: 93c89d51-467e-42d2-849a-0b6e9aa77371 Version UUID: e41071e5-69e7-42b3-bdc4-6279ed7ff522 7-Mode Snapshot: false Label for SnapMirror Operations: - Snapshot State: - Constituent Snapshot: false Node: FsxId0648fddba7bd041af-01 AFS Size from Snapshot: 14.43GB Compression Savings from Snapshot: 0B Dedup Savings from Snapshot: 1.29GB VBN Zero Savings from Snapshot: 35.86MB Reserved (holes and overwrites) in Snapshot: 0B Snapshot Logical Used: 15.76GB Performance Metadata from Snapshot: 180KB Snapshot Inofile Version: 4 Expiry Time: - Compression Type: none SnapLock Expiry Time: - Application IO Size: - Is Qtree Caching Support Enabled: false Compression Algorithm: lzopro Snapshot Created for Conversion: false 2 entries were displayed.
Auto Adaptive Compression reported by volume show-footprint is 7.94GB. So it is indeed possible to apply Inactive data compression to a SnapMirror destination volume.
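Relating that figure back to the block count above (again a rough estimate): 7.94GB saved against roughly 13.6GB of compressed cold blocks works out to about a 60% footprint reduction for that data.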
Note that the Compression counters visible in the other commands all remain 0.
I knew from the KB below that the savings can be checked with volume show-footprint, but I was surprised that the aggr show-efficiency summary does not show them either; in its -instance output, the savings only surface indirectly as Space Saved by Aggregate Data Reduction (7.29GB).
Making the SnapMirror destination volume writable
We have confirmed that TSSE deduplication and Inactive data compression both run on a SnapMirror destination volume.
However, the data blocks removed from the AFS are still locked by the Snapshots, so the physical usage has not changed.
So let's delete the Snapshots and confirm that the physical usage really does go down.
As preparation for that, we make the SnapMirror destination volume writable (a sketch of the whole sequence follows; the actual runs come after it).
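A minimal sketch of the reclaim sequence, under the assumption that the Snapshot name is the one listed earlier (the Snapshot still owned by snapmirror may additionally require deleting the relationship before it can be removed):

::*> snapmirror quiesce -destination-path svm2:vol2_dst
::*> snapmirror break -destination-path svm2:vol2_dst
::*> snapshot delete -vserver svm2 -volume vol2_dst -snapshot snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027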
::*> snapmirror quiesce -destination-path svm2:vol2_dst Operation succeeded: snapmirror quiesce for destination "svm2:vol2_dst". ::*> snapmirror break -destination-path svm2:vol2_dst Operation succeeded: snapmirror break for destination "svm2:vol2_dst". ::*> snapmirror show -destination-path svm2:vol2_dst Source Path: svm:vol2 Source Cluster: - Source Vserver: svm Source Volume: vol2 Destination Path: svm2:vol2_dst Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol2_dst Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Broken-off Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: - Newest Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703 Newest Snapshot Timestamp: 11/20 12:57:03 Exported Snapshot: - Exported Snapshot Timestamp: - Healthy: true Relationship ID: 3f31c76e-879d-11ee-b677-8751a02f6bb7 Source Vserver UUID: 0d9b83f3-8520-11ee-84de-4b7ecb818153 Destination Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: update Last Transfer Error: - Last Transfer Error Codes: - Last Transfer Size: 3.27KB Last Transfer Network Compression Ratio: 1:1 Last Transfer Duration: 0:0:3 Last Transfer From: svm:vol2 Last Transfer End Timestamp: 11/20 12:57:06 Unhealthy Reason: - Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: - Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0648fddba7bd041af-01 Identity Preserve Vserver DR: - Number of Successful Updates: 3 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 1 Number of Failed Breaks: 0 Total Transfer Bytes: 16827724766 Total Transfer Time in Seconds: 219 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
After making the SnapMirror destination volume writable, I checked the aggregate and volume sizes on the destination, and there were no particular changes.
However, I did confirm that `Efficiency Policy Name` is now set to `auto`, and that the Inactive data compression threshold has reverted to the default.
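Incidentally, you do not need the full `-instance` dump just to check these two settings. A quicker check could look like the following (a sketch assuming the `policy` field name of `volume efficiency show`; the threshold itself appears in the `-instance` output shown below):

::*> volume efficiency show -volume vol2_dst -fields policy
::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance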
::*> volume efficiency show -volume vol2_dst -instance

Vserver Name: svm2
Volume Name: vol2_dst
Volume Path: /vol/vol2_dst
State: Enabled
Auto State: Auto
Status: Idle
Progress: Idle for 15:27:23
Type: Regular
Schedule: -
Efficiency Policy Name: auto
Efficiency Policy UUID: c632cd30-8502-11ee-b677-8751a02f6bb7
Optimization: space-saving
Min Blocks Shared: 1
Blocks Skipped Sharing: 0
Last Operation State: Success
Last Success Operation Begin: Tue Nov 21 10:33:07 2023
Last Success Operation End: Tue Nov 21 10:49:39 2023
Last Operation Begin: Tue Nov 21 10:33:07 2023
Last Operation End: Tue Nov 21 10:49:39 2023
Last Operation Size: 29.63GB
Last Operation Error: -
Operation Frequency: -
Changelog Usage: 0%
Changelog Size: 316KB
Vault transfer log Size: 0B
Compression Changelog Size: 0B
Changelog Overflow: 0B
Logical Data Size: 15.77GB
Logical Data Limit: 1.25PB
Logical Data Percent: 0%
Queued Job: -
Stale Fingerprint Percentage: 0
Stage: Done
Checkpoint Time: No Checkpoint
Checkpoint Operation Type: -
Checkpoint Stage: -
Checkpoint Substage: -
Checkpoint Progress: -
Fingerprints Gathered: 7766581
Blocks Processed For Compression: 0
Gathering Begin: Tue Nov 21 10:33:07 UTC 2023
Gathering Phase 2 Begin: Tue Nov 21 10:35:33 UTC 2023
Fingerprints Sorted: 7766581
Duplicate Blocks Found: 7464147
Sorting Begin: Tue Nov 21 10:36:18 UTC 2023
Blocks Deduplicated: 6754145
Blocks Snapshot Crunched: 0
De-duping Begin: Tue Nov 21 10:36:26 UTC 2023
Fingerprints Deleted: 0
Checking Begin: -
Compression: false
Inline Compression: true
Application IO Size: auto
Compression Type: adaptive
Storage Efficiency Mode: efficient
Verify Trigger Rate: 20
Total Verify Time: 00:00:00
Verify Suspend Count: -
Constituent Volume: false
Total Sorted Blocks: 7766581
Same FP Count: 7464147
Same FBN: 0
Same Data: 6754145
No Op: 0
Same VBN: 701822
Mismatched Data: 0
Same Sharing Records: 0
Max RefCount Hits: 0
Stale Recipient Count: 0
Stale Donor Count: 8142
VBN Absent Count: 0
Num Out Of Space: 0
Mismatch Due To Overwrites: 0
Stale Auxiliary Recipient Count: 0
Stale Auxiliary Recipient Block Count: 0
Mismatched Recipient Block Pointers: 0
Unattempted Auxiliary Recipient Share: 0
Skip Share Blocks Delta: 0
Skip Share Blocks Upper: 0
Inline Dedupe: true
Data Compaction: true
Cross Volume Inline Deduplication: false
Compression Algorithm: lzopro
Cross Volume Background Deduplication: false
Extended Compressed Data: true
Volume has auto adaptive compression savings: true
Volume doing auto adaptive compression: true
auto adaptive compression on existing volume: false
File compression application IO Size: -
Compression Algorithm List: lzopro
Compression Begin Time: -
Number of L1s processed by compression phase: 0
Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Volume Has Extended Auto Adaptive Compression: true

::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance

Volume: vol2_dst
Vserver: svm2
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 3731816
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 3325616
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 1937
Time since Last Inactive Data Compression Scan ended(sec): 1760
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 1760
Average time for Cold Data Compression(sec): 29
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 6%
So be careful if you had customized the Inactive data compression threshold on the SnapMirror destination volume.
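If you do need the customized value back after the break, re-applying it should look something like the following. This is only a sketch: `-threshold-days` is the diag-level parameter name as I understand it on ONTAP 9.13.1, so verify it in your environment before relying on it.

::*> set diag
# Re-apply a 1-day threshold after snapmirror break reset it to the default 14 days
::*> volume efficiency inactive-data-compression modify -vserver svm2 -volume vol2_dst -threshold-days 1
# Confirm that the Threshold field changed
::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance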
Next, mount the destination volume on a junction path.
::*> volume mount -volume vol2_dst -junction-path /vol2_dst
Queued private job: 62
After mounting, we check the directories in the volume from an NFS client.
$ sudo mkdir -p /mnt/fsxn/vol2_dst
$ sudo mount -t nfs svm-0b1f078290d27a316.fs-0648fddba7bd041af.fsx.us-east-1.amazonaws.com:/vol2_dst /mnt/fsxn/vol2_dst
$ df -hT -t nfs4
Filesystem                                                                        Type  Size  Used Avail Use% Mounted on
svm-0b1f078290d27a316.fs-0648fddba7bd041af.fsx.us-east-1.amazonaws.com:/vol2_dst nfs4   17G   14G  2.7G  84% /mnt/fsxn/vol2_dst
$ ls -l /mnt/fsxn/vol2_dst
total 812
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc1
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc10
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc11
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc12
.
.
(snip)
.
.
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr3
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr4
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr5
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr6
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr7
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr8
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr9
Deleting the Snapshots
Now let's delete the Snapshots, starting with the newest one.
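Note that a Snapshot still owned by SnapMirror cannot be deleted, which is why we broke the relationship first. If you want to check ownership before deleting, something like the following should do (a sketch assuming the `owners` and `busy` field names, which correspond to the "List of Owners" and "Snapshot Busy" values in the `-instance` output above):

# A Snapshot with "snapmirror" in its owners list is still locked by the relationship
::*> snapshot show -vserver svm2 -volume vol2_dst -fields owners, busy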
::*> snapshot delete -vserver svm2 -volume vol2_dst -snapshot snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703

Warning: Deleting a Snapshot copy permanently removes data that is stored only in that Snapshot copy. Are you sure you want to delete Snapshot copy "snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125703" for volume "vol2_dst" in Vserver "svm2" ? {y|n}: y
After deleting the Snapshot, check the aggregate and volume information.
::*> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr1      861.8GB   813.3GB    6% online       4 FsxId0648fddba7b raid0,
                                                  d041af-01        mirrored,
                                                                   normal

::*> aggr show-space

Aggregate : aggr1
Performance Tier

Feature                                   Used      Used%
--------------------------------    ----------     ------
Volume Footprints                      48.14GB          5%
Aggregate Metadata                      7.64GB          1%
Snapshot Reserve                       45.36GB          5%

Total Used                             93.85GB         10%
Total Physical Used                    47.92GB          5%
Total Provisioned Space                54.30GB          6%

Aggregate : aggr1
Object Store: FSxFabricpoolObjectStore

Feature                                   Used      Used%
--------------------------------    ----------     ------
Logical Used                                0B          -
Logical Referenced Capacity                 0B          -
Logical Unreferenced Capacity               0B          -
Total Physical Used                         0B          -
2 entries were displayed.

::*> volume show-footprint -volume vol2_dst

Vserver : svm2
Volume  : vol2_dst

Feature                                         Used       Used%
--------------------------------           ----------     -----
Volume Data Footprint                         14.80GB        2%
     Footprint in Performance Tier            14.96GB      100%
     Footprint in FSxFabricpoolObjectStore         0B        0%
Volume Guarantee                                   0B        0%
Flexible Volume Metadata                      108.3MB        0%
Deduplication Metadata                         6.95MB        0%
     Deduplication                             6.95MB        0%
Delayed Frees                                 160.2MB        0%
File Operation Metadata                           4KB        0%

Total Footprint                               15.07GB        2%

Footprint Data Reduction                       7.91GB        1%
     Auto Adaptive Compression                 7.91GB        1%
Effective Total Footprint                      7.16GB        1%

::*> volume show-footprint -volume vol2_dst -instance

Vserver: svm2
Volume Name: vol2_dst
Volume MSID: 2157478066
Volume DSID: 1028
Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110
Aggregate Name: aggr1
Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110
Hostname: FsxId0648fddba7bd041af-01
Tape Backup Metadata Footprint: -
Tape Backup Metadata Footprint Percent: -
Deduplication Footprint: 6.95MB
Deduplication Footprint Percent: 0%
Temporary Deduplication Footprint: -
Temporary Deduplication Footprint Percent: -
Cross Volume Deduplication Footprint: -
Cross Volume Deduplication Footprint Percent: -
Cross Volume Temporary Deduplication Footprint: -
Cross Volume Temporary Deduplication Footprint Percent: -
Volume Data Footprint: 14.80GB
Volume Data Footprint Percent: 2%
Flexible Volume Metadata Footprint: 108.3MB
Flexible Volume Metadata Footprint Percent: 0%
Delayed Free Blocks: 160.2MB
Delayed Free Blocks Percent: 0%
SnapMirror Destination Footprint: -
SnapMirror Destination Footprint Percent: -
Volume Guarantee: 0B
Volume Guarantee Percent: 0%
File Operation Metadata: 4KB
File Operation Metadata Percent: 0%
Total Footprint: 15.07GB
Total Footprint Percent: 2%
Containing Aggregate Size: 907.1GB
Name for bin0: Performance Tier
Volume Footprint for bin0: 14.96GB
Volume Footprint bin0 Percent: 100%
Name for bin1: FSxFabricpoolObjectStore
Volume Footprint for bin1: 0B
Volume Footprint bin1 Percent: 0%
Total Deduplication Footprint: 6.95MB
Total Deduplication Footprint Percent: 0%
Footprint Data Reduction by Auto Adaptive Compression: 7.91GB
Footprint Data Reduction by Auto Adaptive Compression Percent: 1%
Total Footprint Data Reduction: 7.91GB
Total Footprint Data Reduction Percent: 1%
Footprint Data Reduction by Capacity Tier: -
Footprint Data Reduction by Capacity Tier Percent: -
Effective Total after Footprint Data Reduction: 7.16GB
Effective Total after Footprint Data Reduction Percent: 1%
Footprint Data Reduction by Compaction: -
Footprint Data Reduction by Compaction Percent: -

::*> aggr show-efficiency

Aggregate: aggr1
Node: FsxId0648fddba7bd041af-01
Total Storage Efficiency Ratio: 7.57:1
Total Data Reduction Efficiency Ratio w/o Snapshots: 8.36:1
Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 8.36:1

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId0648fddba7bd041af-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 282.1GB
Total Physical Used: 37.26GB
Total Storage Efficiency Ratio: 7.57:1
Total Data Reduction Logical Used Without Snapshots: 114.6GB
Total Data Reduction Physical Used Without Snapshots: 13.71GB
Total Data Reduction Efficiency Ratio Without Snapshots: 8.36:1
Total Data Reduction Logical Used without snapshots and flexclones: 114.6GB
Total Data Reduction Physical Used without snapshots and flexclones: 13.71GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 8.36:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 284.5GB
Total Physical Used in FabricPool Performance Tier: 39.94GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 7.12:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 117.1GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 16.40GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 7.14:1
Logical Space Used for All Volumes: 114.6GB
Physical Space Used for All Volumes: 15.41GB
Space Saved by Volume Deduplication: 99.17GB
Space Saved by Volume Deduplication and pattern detection: 99.21GB
Volume Deduplication Savings ratio: 7.44:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 36.33MB
Volume Data Reduction SE Ratio: 7.44:1
Logical Space Used by the Aggregate: 44.55GB
Physical Space Used by the Aggregate: 37.26GB
Space Saved by Aggregate Data Reduction: 7.29GB
Aggregate Data Reduction SE Ratio: 1.20:1
Logical Size Used by Snapshot Copies: 167.5GB
Physical Size Used by Snapshot Copies: 28.15GB
Snapshot Volume Data Reduction Ratio: 5.95:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 5.95:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0

::*> volume efficiency show -volume vol2_dst -instance

Vserver Name: svm2
Volume Name: vol2_dst
Volume Path: /vol/vol2_dst
State: Enabled
Auto State: Auto
Status: Idle
Progress: Idle for 15:39:03
Type: Regular
Schedule: -
Efficiency Policy Name: auto
Efficiency Policy UUID: c632cd30-8502-11ee-b677-8751a02f6bb7
Optimization: space-saving
Min Blocks Shared: 1
Blocks Skipped Sharing: 0
Last Operation State: Success
Last Success Operation Begin: Tue Nov 21 10:33:07 2023
Last Success Operation End: Tue Nov 21 10:49:39 2023
Last Operation Begin: Tue Nov 21 10:33:07 2023
Last Operation End: Tue Nov 21 10:49:39 2023
Last Operation Size: 29.63GB
Last Operation Error: -
Operation Frequency: -
Changelog Usage: 0%
Changelog Size: 316KB
Vault transfer log Size: 0B
Compression Changelog Size: 0B
Changelog Overflow: 0B
Logical Data Size: 15.77GB
Logical Data Limit: 1.25PB
Logical Data Percent: 0%
Queued Job: -
Stale Fingerprint Percentage: 0
Stage: Done
Checkpoint Time: No Checkpoint
Checkpoint Operation Type: -
Checkpoint Stage: -
Checkpoint Substage: -
Checkpoint Progress: -
Fingerprints Gathered: 7766581
Blocks Processed For Compression: 0
Gathering Begin: Tue Nov 21 10:33:07 UTC 2023
Gathering Phase 2 Begin: Tue Nov 21 10:35:33 UTC 2023
Fingerprints Sorted: 7766581
Duplicate Blocks Found: 7464147
Sorting Begin: Tue Nov 21 10:36:18 UTC 2023
Blocks Deduplicated: 6754145
Blocks Snapshot Crunched: 0
De-duping Begin: Tue Nov 21 10:36:26 UTC 2023
Fingerprints Deleted: 0
Checking Begin: -
Compression: false
Inline Compression: true
Application IO Size: auto
Compression Type: adaptive
Storage Efficiency Mode: efficient
Verify Trigger Rate: 20
Total Verify Time: 00:00:00
Verify Suspend Count: -
Constituent Volume: false
Total Sorted Blocks: 7766581
Same FP Count: 7464147
Same FBN: 0
Same Data: 6754145
No Op: 0
Same VBN: 701822
Mismatched Data: 0
Same Sharing Records: 0
Max RefCount Hits: 0
Stale Recipient Count: 0
Stale Donor Count: 8142
VBN Absent Count: 0
Num Out Of Space: 0
Mismatch Due To Overwrites: 0
Stale Auxiliary Recipient Count: 0
Stale Auxiliary Recipient Block Count: 0
Mismatched Recipient Block Pointers: 0
Unattempted Auxiliary Recipient Share: 0
Skip Share Blocks Delta: 0
Skip Share Blocks Upper: 0
Inline Dedupe: true
Data Compaction: true
Cross Volume Inline Deduplication: false
Compression Algorithm: lzopro
Cross Volume Background Deduplication: false
Extended Compressed Data: true
Volume has auto adaptive compression savings: true
Volume doing auto adaptive compression: true
auto adaptive compression on existing volume: false
File compression application IO Size: -
Compression Algorithm List: lzopro
Compression Begin Time: -
Number of L1s processed by compression phase: 0
Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Volume Has Extended Auto Adaptive Compression: true

::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance

Volume: vol2_dst
Vserver: svm2
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 3731816
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 3325616
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 2616
Time since Last Inactive Data Compression Scan ended(sec): 2439
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 2439
Average time for Cold Data Compression(sec): 29
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 6%

::*> volume show -volume vol2_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- -------- ------- ---- --------- --------------- ------- ------- ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm2 vol2_dst 17.48GB 0 2.68GB 17.48GB 16.61GB 13.93GB 83% 13.64GB 49% 13.64GB 49% 1.43GB 0B 0% 27.57GB 166% - 15.77GB 0B 0%

::*> snapshot show -volume vol2_dst
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm2     vol2_dst snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027
                                                         12.67GB    72%   86%

::*> snapshot show -volume vol2_dst -instance

Vserver: svm2
Volume: vol2_dst
Snapshot: snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027
Snapshot Data Set ID: 12884902916
Snapshot Master Data Set ID: 15042379954
Creation Time: Mon Nov 20 12:50:27 2023
Snapshot Busy: false
List of Owners: -
Snapshot Size: 12.67GB
Percentage of Total Blocks: 72%
Percentage of Used Blocks: 86%
Consistency Point Count: 91
Comment: -
File System Version: 9.13
File System Block Format: 64-bit
Physical Snap ID: 3
Logical Snap ID: 3
Database Record Owner: -
Snapshot Tags: SMCreated=snapmirror
Instance UUID: 64454ce7-7d31-47d3-b1a7-f56d61f485df
Version UUID: f2164c11-eff0-4582-935c-8431d20c24c0
7-Mode Snapshot: false
Label for SnapMirror Operations: -
Snapshot State: -
Constituent Snapshot: false
Node: FsxId0648fddba7bd041af-01
AFS Size from Snapshot: 14.43GB
Compression Savings from Snapshot: 0B
Dedup Savings from Snapshot: 1.29GB
VBN Zero Savings from Snapshot: 35.86MB
Reserved (holes and overwrites) in Snapshot: 0B
Snapshot Logical Used: 15.76GB
Performance Metadata from Snapshot: 456KB
Snapshot Inofile Version: 4
Expiry Time: -
Compression Type: none
SnapLock Expiry Time: -
Application IO Size: -
Is Qtree Caching Support Enabled: false
Compression Algorithm: lzopro
Snapshot Created for Conversion: false
The amount of data saved by deduplication increased slightly, from 13.51GB to 13.64GB.
Let's also check the state after deleting the newest Snapshot from the NFS client.
$ df -hT -t nfs4
Filesystem                                                                        Type  Size  Used Avail Use% Mounted on
svm-0b1f078290d27a316.fs-0648fddba7bd041af.fsx.us-east-1.amazonaws.com:/vol2_dst nfs4   17G   14G  2.7G  84% /mnt/fsxn/vol2_dst
$ ls -l /mnt/fsxn/vol2_dst
total 812
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc1
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc10
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc11
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc12
.
.
(snip)
.
.
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr4
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr5
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr6
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr7
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr8
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr9
Nothing has changed here either; all the directories in the volume are still present.
Now, let's delete the last remaining Snapshot.
::*> snapshot delete -vserver svm2 -volume vol2_dst -snapshot snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027

Warning: Deleting a Snapshot copy permanently removes data that is stored only in that Snapshot copy. Are you sure you want to delete Snapshot copy "snapmirror.c5f8f20e-8502-11ee-adbe-13bc02fe3110_2157478066.2023-11-20_125027" for volume "vol2_dst" in Vserver "svm2" ? {y|n}: y
Check the state of the aggregate and the volume after the Snapshot deletion.
::*> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr1      861.8GB   824.6GB    4% online       4 FsxId0648fddba7b raid0,
                                                  d041af-01        mirrored,
                                                                   normal

::*> aggr show-space

Aggregate : aggr1
Performance Tier

Feature                                   Used      Used%
--------------------------------    ----------     ------
Volume Footprints                      36.64GB          4%
Aggregate Metadata                      3.57GB          0%
Snapshot Reserve                       45.36GB          5%

Total Used                             82.52GB          9%
Total Physical Used                    43.39GB          5%
Total Provisioned Space                52.56GB          6%

Aggregate : aggr1
Object Store: FSxFabricpoolObjectStore

Feature                                   Used      Used%
--------------------------------    ----------     ------
Logical Used                                0B          -
Logical Referenced Capacity                 0B          -
Logical Unreferenced Capacity               0B          -
Total Physical Used                         0B          -
2 entries were displayed.

::*> volume show-footprint -volume vol2_dst

Vserver : svm2
Volume  : vol2_dst

Feature                                         Used       Used%
--------------------------------           ----------     -----
Volume Data Footprint                          2.13GB        0%
     Footprint in Performance Tier             3.46GB      100%
     Footprint in FSxFabricpoolObjectStore         0B        0%
Volume Guarantee                                   0B        0%
Flexible Volume Metadata                      108.3MB        0%
Deduplication Metadata                         6.95MB        0%
     Deduplication                             6.95MB        0%
Delayed Frees                                  1.32GB        0%
File Operation Metadata                           4KB        0%

Total Footprint                                3.57GB        0%

Footprint Data Reduction                       1.83GB        0%
     Auto Adaptive Compression                 1.83GB        0%
Effective Total Footprint                      1.74GB        0%

::*> volume show-footprint -volume vol2_dst -instance

Vserver: svm2
Volume Name: vol2_dst
Volume MSID: 2157478066
Volume DSID: 1028
Vserver UUID: c5f8f20e-8502-11ee-adbe-13bc02fe3110
Aggregate Name: aggr1
Aggregate UUID: ed465829-8501-11ee-adbe-13bc02fe3110
Hostname: FsxId0648fddba7bd041af-01
Tape Backup Metadata Footprint: -
Tape Backup Metadata Footprint Percent: -
Deduplication Footprint: 6.95MB
Deduplication Footprint Percent: 0%
Temporary Deduplication Footprint: -
Temporary Deduplication Footprint Percent: -
Cross Volume Deduplication Footprint: -
Cross Volume Deduplication Footprint Percent: -
Cross Volume Temporary Deduplication Footprint: -
Cross Volume Temporary Deduplication Footprint Percent: -
Volume Data Footprint: 2.13GB
Volume Data Footprint Percent: 0%
Flexible Volume Metadata Footprint: 108.3MB
Flexible Volume Metadata Footprint Percent: 0%
Delayed Free Blocks: 1.32GB
Delayed Free Blocks Percent: 0%
SnapMirror Destination Footprint: -
SnapMirror Destination Footprint Percent: -
Volume Guarantee: 0B
Volume Guarantee Percent: 0%
File Operation Metadata: 4KB
File Operation Metadata Percent: 0%
Total Footprint: 3.57GB
Total Footprint Percent: 0%
Containing Aggregate Size: 907.1GB
Name for bin0: Performance Tier
Volume Footprint for bin0: 3.46GB
Volume Footprint bin0 Percent: 100%
Name for bin1: FSxFabricpoolObjectStore
Volume Footprint for bin1: 0B
Volume Footprint bin1 Percent: 0%
Total Deduplication Footprint: 6.95MB
Total Deduplication Footprint Percent: 0%
Footprint Data Reduction by Auto Adaptive Compression: 1.83GB
Footprint Data Reduction by Auto Adaptive Compression Percent: 0%
Total Footprint Data Reduction: 1.83GB
Total Footprint Data Reduction Percent: 0%
Footprint Data Reduction by Capacity Tier: -
Footprint Data Reduction by Capacity Tier Percent: -
Effective Total after Footprint Data Reduction: 1.74GB
Effective Total after Footprint Data Reduction Percent: 0%
Footprint Data Reduction by Compaction: -
Footprint Data Reduction by Compaction Percent: -

::*> aggr show-efficiency

Aggregate: aggr1
Node: FsxId0648fddba7bd041af-01
Total Storage Efficiency Ratio: 7.54:1
Total Data Reduction Efficiency Ratio w/o Snapshots: 5.44:1
Total Data Reduction Efficiency Ratio w/o Snapshots & FlexClones: 5.44:1

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId0648fddba7bd041af-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 266.3GB
Total Physical Used: 35.34GB
Total Storage Efficiency Ratio: 7.54:1
Total Data Reduction Logical Used Without Snapshots: 114.6GB
Total Data Reduction Physical Used Without Snapshots: 21.09GB
Total Data Reduction Efficiency Ratio Without Snapshots: 5.44:1
Total Data Reduction Logical Used without snapshots and flexclones: 114.6GB
Total Data Reduction Physical Used without snapshots and flexclones: 21.09GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 5.44:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 268.8GB
Total Physical Used in FabricPool Performance Tier: 38.03GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 7.07:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 117.1GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 23.78GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 4.92:1
Logical Space Used for All Volumes: 114.6GB
Physical Space Used for All Volumes: 15.41GB
Space Saved by Volume Deduplication: 99.17GB
Space Saved by Volume Deduplication and pattern detection: 99.21GB
Volume Deduplication Savings ratio: 7.44:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 36.33MB
Volume Data Reduction SE Ratio: 7.44:1
Logical Space Used by the Aggregate: 38.39GB
Physical Space Used by the Aggregate: 35.34GB
Space Saved by Aggregate Data Reduction: 3.05GB
Aggregate Data Reduction SE Ratio: 1.09:1
Logical Size Used by Snapshot Copies: 151.7GB
Physical Size Used by Snapshot Copies: 15.48GB
Snapshot Volume Data Reduction Ratio: 9.80:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 9.80:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0

::*> volume efficiency show -volume vol2_dst -instance

Vserver Name: svm2
Volume Name: vol2_dst
Volume Path: /vol/vol2_dst
State: Enabled
Auto State: Auto
Status: Idle
Progress: Idle for 15:48:04
Type: Regular
Schedule: -
Efficiency Policy Name: auto
Efficiency Policy UUID: c632cd30-8502-11ee-b677-8751a02f6bb7
Optimization: space-saving
Min Blocks Shared: 1
Blocks Skipped Sharing: 0
Last Operation State: Success
Last Success Operation Begin: Tue Nov 21 10:33:07 2023
Last Success Operation End: Tue Nov 21 10:49:39 2023
Last Operation Begin: Tue Nov 21 10:33:07 2023
Last Operation End: Tue Nov 21 10:49:39 2023
Last Operation Size: 29.63GB
Last Operation Error: -
Operation Frequency: -
Changelog Usage: 0%
Changelog Size: 316KB
Vault transfer log Size: 0B
Compression Changelog Size: 0B
Changelog Overflow: 0B
Logical Data Size: 15.77GB
Logical Data Limit: 1.25PB
Logical Data Percent: 0%
Queued Job: -
Stale Fingerprint Percentage: 0
Stage: Done
Checkpoint Time: No Checkpoint
Checkpoint Operation Type: -
Checkpoint Stage: -
Checkpoint Substage: -
Checkpoint Progress: -
Fingerprints Gathered: 7766581
Blocks Processed For Compression: 0
Gathering Begin: Tue Nov 21 10:33:07 UTC 2023
Gathering Phase 2 Begin: Tue Nov 21 10:35:33 UTC 2023
Fingerprints Sorted: 7766581
Duplicate Blocks Found: 7464147
Sorting Begin: Tue Nov 21 10:36:18 UTC 2023
Blocks Deduplicated: 6754145
Blocks Snapshot Crunched: 0
De-duping Begin: Tue Nov 21 10:36:26 UTC 2023
Fingerprints Deleted: 0
Checking Begin: -
Compression: false
Inline Compression: true
Application IO Size: auto
Compression Type: adaptive
Storage Efficiency Mode: efficient
Verify Trigger Rate: 20
Total Verify Time: 00:00:00
Verify Suspend Count: -
Constituent Volume: false
Total Sorted Blocks: 7766581
Same FP Count: 7464147
Same FBN: 0
Same Data: 6754145
No Op: 0
Same VBN: 701822
Mismatched Data: 0
Same Sharing Records: 0
Max RefCount Hits: 0
Stale Recipient Count: 0
Stale Donor Count: 8142
VBN Absent Count: 0
Num Out Of Space: 0
Mismatch Due To Overwrites: 0
Stale Auxiliary Recipient Count: 0
Stale Auxiliary Recipient Block Count: 0
Mismatched Recipient Block Pointers: 0
Unattempted Auxiliary Recipient Share: 0
Skip Share Blocks Delta: 0
Skip Share Blocks Upper: 0
Inline Dedupe: true
Data Compaction: true
Cross Volume Inline Deduplication: false
Compression Algorithm: lzopro
Cross Volume Background Deduplication: false
Extended Compressed Data: true
Volume has auto adaptive compression savings: true
Volume doing auto adaptive compression: true
auto adaptive compression on existing volume: false
File compression application IO Size: -
Compression Algorithm List: lzopro
Compression Begin Time: -
Number of L1s processed by compression phase: 0
Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Volume Has Extended Auto Adaptive Compression: true

::*> volume efficiency inactive-data-compression show -volume vol2_dst -instance

Volume: vol2_dst
Vserver: svm2
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 3731816
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 3325616
Number of Vol-Overwrites: 2649537
Time since Last Inactive Data Compression Scan started(sec): 3141
Time since Last Inactive Data Compression Scan ended(sec): 2964
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 2964
Average time for Cold Data Compression(sec): 29
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 6%

::*> volume show -volume vol2_dst -fields available, filesystem-size, total, user, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, compression-space-saved, compression-space-saved-percent, logical-used, logical-used-percent, logical-used-by-afs, logical-available
vserver volume size user available filesystem-size total used percent-used sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared compression-space-saved compression-space-saved-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- -------- ------- ---- --------- --------------- ------- ------ ------------ --------------- ----------------------- ------------------ -------------------------- ------------------- ----------------------- ------------------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm2 vol2_dst 15.74GB 0 12.82GB 15.74GB 14.96GB 2.13GB 14% 13.64GB 86% 13.64GB 86% 1.43GB 0B 0% 15.77GB 105% - 15.77GB 0B 0%

::*> snapshot show -volume vol2_dst
There are no entries matching your query.
The aggregate's free space increased substantially, from 813.3GB to 824.6GB. This confirms that when Snapshots taken before TSSE ran still exist, deleting them reclaims physical free space.
Let's also check the state after deleting the remaining Snapshot from the NFS client.
$ ls -l /mnt/fsxn/vol2_dst
total 812
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc1
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc10
drwxr-xr-x. 77 root root 16384 Nov  6 00:43 etc11
.
.
(snip)
.
.
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr3
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr4
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr5
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr6
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr7
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr8
drwxr-xr-x. 12 root root  4096 Oct  2 16:30 usr9
$ df -hT -t nfs4
Filesystem                                                                        Type  Size  Used Avail Use% Mounted on
svm-0b1f078290d27a316.fs-0648fddba7bd041af.fsx.us-east-1.amazonaws.com:/vol2_dst nfs4   15G  2.2G   13G  15% /mnt/fsxn/vol2_dst
The directories in the volume are unchanged, but the usage has dropped from 14GB to 2.2GB.
Applying TSSE before running SnapMirror is preferable
That was a look at how SnapMirror interacts with Temperature Sensitive Storage Efficiency (TSSE) on Amazon FSx for NetApp ONTAP.
We confirmed that additional TSSE processing does take effect on the SnapMirror destination. However, because Snapshots lock the data blocks, realizing the physical data reduction from TSSE takes an extra step.
Ideally, TSSE should be applied on the source volume before running SnapMirror, so consider that approach first.
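For reference, preparing the source volume (this article's svm:vol2) before the baseline transfer might look like the sketch below. `-storage-efficiency-mode efficient` is the switch that puts a volume into TSSE mode; the option names for enabling Inactive data compression are as observed on ONTAP 9.13.1, so double-check them on your version.

# Switch the source volume to TSSE
::*> volume efficiency modify -vserver svm -volume vol2 -storage-efficiency-mode efficient
# Enable Inactive data compression, then compress cold data right away
::*> volume efficiency inactive-data-compression modify -vserver svm -volume vol2 -is-enabled true
::*> volume efficiency inactive-data-compression start -vserver svm -volume vol2 -inactive-days 0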
The relationship between SnapMirror and Storage Efficiency is also summarized well in the following KB, so please take a look at that too.
I hope this article helps someone.
That's all from のんピ (@non____97), Consulting Department, AWS Business Division!